1. Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024;89:102378. PMID: 38838549. DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
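The paradigm this abstract describes reduces, in its simplest form, to supervised image-to-image regression between paired channels. The toy below stands in a single learned 3x3 kernel for the deep networks used in practice; all data and names are illustrative, not taken from the paper.

```python
import numpy as np

# Toy sketch of in silico labeling as supervised image-to-image regression:
# learn a local transform mapping a transmitted-light patch to a
# fluorescence pixel. Real systems use deep U-Nets; a 3x3 kernel stands in.

rng = np.random.default_rng(0)

def conv2d_valid(img, k):
    """Valid-mode 2D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# Synthetic "paired" acquisition: the fluorescence channel is a fixed
# local transform of the transmitted-light image.
true_k = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]]) / 8.0
tl = rng.random((32, 32))          # transmitted-light input
fl = conv2d_valid(tl, true_k)      # fluorescence target

# "Training": least-squares fit of the kernel from patch/pixel pairs.
X = np.array([tl[i:i + 3, j:j + 3].ravel()
              for i in range(30) for j in range(30)])
y = fl.ravel()
k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
recovered = np.allclose(k_hat.reshape(3, 3), true_k, atol=1e-8)
```

With exactly paired data the kernel is recovered; the open questions the review raises (generalization, validation, biological insight) arise precisely because real paired data are noisier and the mapping is not this well posed.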
Affiliation(s)
- Nitsan Elmalam: Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava: Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky: Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
2. Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024;21:1558-1567. PMID: 38609490. DOI: 10.1038/s41592-024-02244-3.
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and has led to significant progress, benefiting from deep learning technology. However, most current task-specific methods have limited generalizability to different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model to address different restoration problems, and show that UniFMIR offers higher image restoration precision, better generalization and increased versatility. Evaluations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples demonstrate that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures and facilitate high-quality imaging. This work has the potential to inspire new research directions in fluorescence microscopy-based image restoration.
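The pretrain-then-fine-tune step that this generalization rests on can be illustrated in miniature: freeze a feature extractor and refit only a small task-specific head on the new data. The random-projection "backbone" and the synthetic task below are assumptions for illustration only, not the published architecture.

```python
import numpy as np

# Minimal sketch of transfer via fine-tuning: the backbone is frozen
# (never updated) and only a linear head is fit on the new task.

rng = np.random.default_rng(1)
W_frozen = rng.standard_normal((64, 16))   # stand-in for pretrained weights

def backbone(x):
    """Frozen feature extractor: a fixed nonlinear projection."""
    return np.tanh(x @ W_frozen)

# A modest number of pairs from the new task suffices to fit the head.
x_new = rng.standard_normal((200, 64))
y_new = x_new.sum(axis=1, keepdims=True)   # synthetic target task
head, *_ = np.linalg.lstsq(backbone(x_new), y_new, rcond=None)

pred = backbone(x_new) @ head
mse_head = float(np.mean((pred - y_new) ** 2))
mse_zero = float(np.mean(y_new ** 2))      # baseline: predict zero
```

The fitted head always beats the trivial baseline on the training pairs; whether the frozen features transfer well to a *new* task is exactly the empirical question the paper addresses at scale.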
Affiliation(s)
- Chenxi Ma: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Weimin Tan: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Ruian He: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Bo Yan: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
3. Feng R, Li S, Zhang Y. AI-powered microscopy image analysis for parasitology: integrating human expertise. Trends Parasitol 2024;40:633-646. PMID: 38824067. DOI: 10.1016/j.pt.2024.05.005.
Abstract
Microscopy image analysis plays a pivotal role in parasitology research. Deep learning (DL), a subset of artificial intelligence (AI), has garnered significant attention in this area. However, traditional general-purpose DL methods are data-driven and often lack explainability due to their black-box nature and sparse instructional resources. To address these challenges, this article presents a comprehensive review of recent advancements in knowledge-integrated DL models tailored for microscopy image analysis in parasitology. Integrating the extensive expert knowledge of parasitologists can enhance the accuracy and explainability of AI-driven decisions. The adoption of knowledge-integrated DL models is expected to open up a wide range of applications in the field of parasitology.
Affiliation(s)
- Ruijun Feng: College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Sen Li: College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
- Yang Zhang: College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
4. Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024;170:227-241. PMID: 37992510. DOI: 10.1016/j.neunet.2023.11.039.
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, limitations of the optical components, coupled with the maximum photon budget that the specimen can tolerate, inevitably lead to a decline in imaging quality and a loss of useful signal. Image restoration therefore becomes essential for high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for denoising microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates the wavelet transform and its inverse for multi-resolution image decomposition and reconstruction, expanding the network's receptive field without compromising information integrity. Multiple consecutive parallel CNN-Transformer modules then collaboratively model local and global dependencies, facilitating the extraction of more comprehensive and diversified deep features. In addition, incorporating generative adversarial networks (GANs) into WECT enhances its capacity to generate microscopic images of high perceptual quality. Extensive experiments demonstrate that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of both quantitative and qualitative analysis.
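The wavelet half of such a pipeline can be sketched with a one-level 2D Haar transform plus soft-thresholding of the detail sub-bands; the CNN-Transformer and GAN components of WECT are beyond a toy, and the threshold value here is an arbitrary illustrative choice.

```python
import numpy as np

# One-level 2D Haar decomposition/reconstruction with soft-thresholding
# of the detail coefficients. Input side length must be even.

def haar2d(x):
    a = (x[0::2, :] + x[1::2, :]) / 2.0    # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0    # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x

def soft(c, t):
    """Soft-threshold: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t=0.1):
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

The transform is exactly invertible (t=0 returns the input), which is the "without compromising information integrity" property the abstract refers to; learning then operates on the sub-bands instead of raw pixels.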
Affiliation(s)
- Qinghua Wang: School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- Ziwei Li: School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China
- Shuqi Zhang: School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- Nan Chi: School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China
- Qionghai Dai: School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China
5. Sun H, Li J, Murphy RF. Expanding the coverage of spatial proteomics: a machine learning approach. Bioinformatics 2024;40:btae062. PMID: 38310340. PMCID: PMC10873576. DOI: 10.1093/bioinformatics/btae062.
Abstract
MOTIVATION: Multiplexed protein imaging methods use a chosen set of markers and provide valuable information about complex tissue structure and cellular heterogeneity. However, the number of markers that can be measured in the same tissue sample is inherently limited.
RESULTS: In this paper, we present an efficient method to choose a minimal predictive subset of markers that, for the first time, allows the prediction of full images for a much larger set of markers. We demonstrate that our approach also outperforms previous methods for predicting cell-level protein composition. Most importantly, we demonstrate that our approach can be used to select a marker set that enables prediction of a much larger set than could be measured concurrently.
AVAILABILITY AND IMPLEMENTATION: All code and intermediate results are available in a Reproducible Research Archive at https://github.com/murphygroup/CODEXPanelOptimization.
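The subset-selection loop at the heart of this kind of panel optimization can be sketched with a greedy forward search: pick, one at a time, the marker whose expression best linearly predicts the remaining markers. The pooled-R^2 criterion and linear models below are illustrative assumptions; the authors' actual criterion and image-level predictors are richer.

```python
import numpy as np

# Greedy forward selection of a predictive marker panel from a
# (cells x markers) matrix X, assumed column-centered.

def pooled_r2(X, subset, rest):
    """R^2 of predicting columns `rest` from columns `subset`."""
    A, B = X[:, subset], X[:, rest]
    coef, *_ = np.linalg.lstsq(A, B, rcond=None)
    resid = B - A @ coef
    return 1.0 - resid.var() / B.var()

def greedy_panel(X, k):
    """Select k marker indices that best predict the unselected ones."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining,
                   key=lambda j: pooled_r2(X, chosen + [j],
                                           [m for m in remaining if m != j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

On synthetic data where three markers are scalings of one latent signal and a fourth is independent noise, the first pick is always one of the informative three.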
Affiliation(s)
- Huangqingbo Sun: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Jiayi Li: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Robert F Murphy: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
6. Park R, Kang MS, Heo G, Shin YC, Han DW, Hong SW. Regulated Behavior in Living Cells with Highly Aligned Configurations on Nanowrinkled Graphene Oxide Substrates: Deep Learning Based on Interplay of Cellular Contact Guidance. ACS Nano 2024;18:1325-1344. PMID: 38099607. DOI: 10.1021/acsnano.2c09815.
Abstract
Micro-/nanotopographical cues have emerged as a practical and promising strategy for controlling cell fate and reprogramming, playing a key role as biophysical regulators in diverse cellular processes and behaviors. Extracellular biophysical factors can trigger intracellular physiological signaling via mechanotransduction and promote cellular responses such as cell adhesion, migration, proliferation, gene/protein expression, and differentiation. Here, we engineered a highly ordered nanowrinkled graphene oxide (GO) surface via mechanical deformation of an ultrathin GO film on an elastomeric substrate to observe specific cellular responses to surface-mediated topographical cues. The ultrathin GO film on the uniaxially prestrained elastomeric substrate, formed through self-assembly and subsequent compressive force, produced GO nanowrinkles with periodic amplitude. To examine acute cellular behaviors on the GO-based cell interface with nanostructured arrays of wrinkles, we cultured L929 fibroblasts and HT22 hippocampal neuronal cells. The developed cell-culture substrate provided a clear directional guidance effect. Based on these observations, we adapted a deep learning (DL)-based data processing technique to precisely interpret cell behaviors on the nanowrinkled GO surfaces. Using a learning/transfer learning protocol, the DL network detected cell boundaries, elongation, and orientation and quantitatively evaluated cell velocity, traveling distance, displacement, and orientation. These results suggest that the nanotopographical microenvironment can steer the morphological polarization of living cells so that they can be assembled into useful tissue chips consisting of multiple cell types.
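One of the quantities extracted from the segmented cells, in-plane orientation, has a simple classical definition that a sketch can show: the principal-axis angle of a binary mask from its second-order image moments. The paper's DL pipeline does detection and tracking as well; this covers only the orientation readout, with image-convention axes (x right, y down) as an assumption.

```python
import numpy as np

# Principal-axis orientation of a segmented cell from a binary mask.

def mask_orientation_deg(mask):
    """Angle of the mask's major axis in degrees, in (-90, 90],
    using image coordinates (x to the right, y downward)."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mxx, myy, mxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.degrees(np.arctan2(2.0 * mxy, mxx - myy))
```

A horizontal bar of pixels scores 0 degrees and a vertical bar 90 degrees, matching the alignment-angle histograms such contact-guidance studies report.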
Affiliation(s)
- Rowoon Park: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Moon Sung Kang: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Gyeonghwa Heo: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Yong Cheol Shin: Department of Inflammation and Immunity, Lerner Research Institute, Cleveland Clinic, Ohio 44195, United States
- Dong-Wook Han: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Suck Won Hong: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea; Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Pusan National University, Busan 46241, Republic of Korea
7. Li X, Hu X, Chen X, Fan J, Zhao Z, Wu J, Wang H, Dai Q. Spatial redundancy transformer for self-supervised fluorescence image denoising. Nat Comput Sci 2023;3:1067-1080. PMID: 38177722. PMCID: PMC10766531. DOI: 10.1038/s43588-023-00568-2.
Abstract
Fluorescence imaging with high signal-to-noise ratios has become the foundation of accurate visualization and analysis of biological phenomena. However, the inevitable noise poses a formidable challenge to imaging sensitivity. Here we provide the spatial redundancy denoising transformer (SRDTrans) to remove noise from fluorescence images in a self-supervised manner. First, a sampling strategy based on spatial redundancy is proposed to extract adjacent orthogonal training pairs, which eliminates the dependence on high imaging speed. Second, we designed a lightweight spatiotemporal transformer architecture to capture long-range dependencies and high-resolution features at low computational cost. SRDTrans can restore high-frequency information without producing oversmoothed structures or distorted fluorescence traces. Finally, we demonstrate the state-of-the-art denoising performance of SRDTrans on single-molecule localization microscopy and two-photon volumetric calcium imaging. SRDTrans makes no assumptions about the imaging process or the sample and can thus be easily extended to various imaging modalities and biological applications.
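The spatial-redundancy idea behind the sampling strategy can be shown in its simplest form: neighboring pixels carry nearly identical signal but independent noise, so two interleaved sub-images of a single noisy frame can serve as an (input, target) pair for self-supervised training, with no need for fast acquisition. The paper's orthogonal sampling scheme is more elaborate than this column split.

```python
import numpy as np

# Build a self-supervised training pair from ONE noisy frame by
# splitting it into interleaved half-width sub-images.

def redundancy_pair(frame):
    """Even columns become the input, odd columns the target."""
    return frame[:, 0::2], frame[:, 1::2]

# Demo: smooth structure plus independent per-pixel noise.
rng = np.random.default_rng(4)
xx = np.linspace(0, 1, 64)
signal = np.tile(np.sin(2 * np.pi * xx), (64, 1))      # smooth structure
noisy = signal + 0.1 * rng.standard_normal((64, 64))   # independent noise
inp, tgt = redundancy_pair(noisy)
```

Because the underlying signal varies slowly across columns, the two halves are strongly correlated through the signal while their noise is not, which is what makes the pair usable as supervision.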
Affiliation(s)
- Xinyang Li: Department of Automation, Tsinghua University, Beijing, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Xiaowan Hu: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Xingye Chen: Department of Automation, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China; Research Institute for Frontier Science, Beihang University, Beijing, China
- Jiaqi Fan: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Department of Electronic Engineering, Tsinghua University, Beijing, China
- Zhifeng Zhao: Department of Automation, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China; Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Haoqian Wang: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; The Shenzhen Institute of Future Media Technology, Shenzhen, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China; Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China; Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
8.
Affiliation(s)
- Xinyang Li: Department of Automation, Tsinghua University, Beijing, China
- Yuanlong Zhang: Department of Automation, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China
9.
Abstract
Super-resolution fluorescence microscopy allows the investigation of cellular structures at nanoscale resolution using light. Current developments in super-resolution microscopy have focused on reliable quantification of the underlying biological data. In this review, we first describe the basic principles of super-resolution microscopy techniques such as stimulated emission depletion (STED) microscopy and single-molecule localization microscopy (SMLM), and then give a broad overview of methodological developments to quantify super-resolution data, particularly those geared toward SMLM data. We cover commonly used techniques such as spatial point pattern analysis, colocalization, and protein copy number quantification but also describe more advanced techniques such as structural modeling, single-particle tracking, and biosensing. Finally, we provide an outlook on exciting new research directions to which quantitative super-resolution microscopy might be applied.
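Of the quantification methods the review covers, spatial point pattern analysis lends itself to a compact sketch: Ripley's K counts how many localization pairs fall within radius r, normalized so that clustering pushes K above its complete-spatial-randomness expectation. Edge correction, which practical implementations include, is omitted here for brevity.

```python
import numpy as np

# Uncorrected Ripley's K for a 2D point pattern:
# K(r) = area / n^2 * (number of ordered pairs closer than r).

def ripley_k(points, r, area):
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    pairs = int((d < r).sum()) - n      # drop the n zero self-distances
    return area * pairs / (n * n)
```

A tightly clustered pattern yields a much larger K(r) than a well-spread one at the same r, which is how SMLM studies detect nanoscale clustering of localizations.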
Affiliation(s)
- Siewert Hugelier: Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- P L Colosi: Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Melike Lakadamyali: Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Department of Cell and Developmental Biology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Epigenetics Institute, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
10. Sun H, Jiang Q, Huang Y, Mo J, Xie W, Dong H, Jia Y. Integrated smart analytics of nucleic acid amplification tests via paper microfluidics and deep learning in cloud computing. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104721.
11. Lu P, Oetjen KA, Bender DE, Ruzinova MB, Fisher DAC, Shim KG, Pachynski RK, Brennen WN, Oh ST, Link DC, Thorek DLJ. IMC-Denoise: a content aware denoising pipeline to enhance Imaging Mass Cytometry. Nat Commun 2023;14:1601. PMID: 36959190. PMCID: PMC10036333. DOI: 10.1038/s41467-023-37123-6.
Abstract
Imaging Mass Cytometry (IMC) is an emerging multiplexed imaging technology for analyzing complex microenvironments using more than 40 molecularly specific channels. However, this modality has unique data processing requirements, particularly for patient tissue specimens, where signal-to-noise ratios for markers can be low despite optimization and pixel intensity artifacts can deteriorate image quality and downstream analysis. Here we demonstrate an automated content-aware pipeline, IMC-Denoise, to restore IMC images, deploying a differential intensity map-based restoration (DIMR) algorithm for removing hot pixels and a self-supervised deep learning algorithm for shot noise image filtering (DeepSNiF). IMC-Denoise outperforms existing methods for adaptive hot pixel and background noise removal, with significant image quality improvement in modeled data and datasets from multiple pathologies. This includes technically challenging human bone marrow, where we achieve an 87% noise level reduction for a 5.6-fold higher contrast-to-noise ratio and more accurate background noise removal with an approximately 2x improved F1 score. Our approach enhances manual gating and automated phenotyping with cell-scale downstream analyses. Verified by manual annotations, spatial and density analyses of targeted cell groups reveal subtle but significant differences between cell populations in diseased bone marrow. We anticipate that IMC-Denoise will provide similar benefits across mass cytometric applications to more deeply characterize complex tissue microenvironments.
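The hot-pixel half of such a pipeline can be sketched as a robust outlier test on a differential intensity map: score each pixel by its difference from the local 3x3 median and replace extreme positive outliers with that median. The MAD-based z-score and its cutoff below are illustrative assumptions, not the published DIMR statistic.

```python
import numpy as np

# Toy hot-pixel removal: flag pixels far above their 3x3 median.

def remove_hot_pixels(img, z=5.0):
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    med = np.median(stack, axis=0)         # 3x3 neighborhood median
    diff = img - med                       # differential intensity map
    scale = 1.4826 * np.median(np.abs(diff)) + 1e-12
    out = img.copy()
    hot = diff > z * scale                 # robust z-score threshold
    out[hot] = med[hot]
    return out
```

Isolated spikes are suppressed while smooth regions pass through untouched; a real pipeline would follow this with shot-noise filtering, as DeepSNiF does.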
Affiliation(s)
- Peng Lu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, USA; Department of Radiology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, USA; Program in Quantitative Molecular Therapeutics, Washington University School of Medicine, St. Louis, USA
- Karolyn A Oetjen: Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Diane E Bender: The Bursky Center for Human Immunology and Immunotherapy Programs Immunomonitoring Laboratory, Washington University School of Medicine, St. Louis, USA
- Marianna B Ruzinova: Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel A C Fisher: Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Kevin G Shim: Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Russell K Pachynski: Department of Medicine, Washington University School of Medicine, St. Louis, USA
- W Nathaniel Brennen: Department of Oncology, Sidney Kimmel Comprehensive Cancer Center (SKCCC), Johns Hopkins University, Baltimore, USA; Department of Urology, James Buchanan Brady Urological Institute, Johns Hopkins University School of Medicine, Baltimore, USA
- Stephen T Oh: Department of Medicine, Washington University School of Medicine, St. Louis, USA; The Bursky Center for Human Immunology and Immunotherapy Programs Immunomonitoring Laboratory, Washington University School of Medicine, St. Louis, USA; Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel C Link: Department of Medicine, Washington University School of Medicine, St. Louis, USA; Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel L J Thorek: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, USA; Department of Radiology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, USA; Program in Quantitative Molecular Therapeutics, Washington University School of Medicine, St. Louis, USA; Department of Radiation Oncology, Washington University School of Medicine, St. Louis, USA; Oncologic Imaging Program, Siteman Cancer Center, Washington University School of Medicine, St. Louis, USA
12. Jiao Y, Gu L, Jiang Y, Weng M, Yang M. Digitally predicting protein localization and manipulating protein activity in fluorescence images using 4D reslicing GAN. Bioinformatics 2023;39:btac719. PMID: 36373962. PMCID: PMC9805574. DOI: 10.1093/bioinformatics/btac719.
Abstract
MOTIVATION: While multi-channel fluorescence microscopy is a vital imaging method in biological studies, the number of channels that can be imaged simultaneously is limited by technical and hardware constraints such as emission spectra cross-talk. One solution is to use deep neural networks to model the localization relationship between two proteins so that the localization of one protein can be digitally predicted. Furthermore, the input and predicted localization implicitly reflect the modeled relationship. Accordingly, observing the response of the prediction while manipulating the input localization provides an informative way to analyze the modeled relationship between the input and the predicted proteins.
RESULTS: We propose a protein localization prediction (PLP) method using a cGAN named 4D Reslicing Generative Adversarial Network (4DR-GAN) to digitally generate additional channels. 4DR-GAN models the joint probability distribution of input and output proteins by simultaneously incorporating the protein localization signals in four dimensions, including space and time. Because protein localization often correlates with protein activation state, based on accurate PLP we further propose two novel tools, digital activation (DA) and digital inactivation (DI), to digitally activate and inactivate a protein in order to observe the response of the predicted protein localization. Compared with genetic approaches, these tools allow precise spatial and temporal control. A comprehensive experiment on six pairs of proteins shows that 4DR-GAN achieves higher-quality PLP than Pix2Pix, and the DA and DI responses are consistent with the known protein functions. The proposed PLP method helps visualize additional proteins simultaneously, and the developed DA and DI tools provide guidance for studying localization-based protein functions.
AVAILABILITY AND IMPLEMENTATION: The open-source code is available at https://github.com/YangJiaoUSA/4DR-GAN.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
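The digital activation/inactivation idea reduces to a simple probe once a predictor exists: perturb one input channel and compare the prediction before and after. The stand-in predictor below is a toy; 4DR-GAN itself is a trained conditional GAN, and all names here are illustrative.

```python
import numpy as np

# Probe a localization predictor by scaling one input channel:
# factor=0.0 corresponds to digital inactivation (DI).

def digital_perturb(predict, channels, idx, factor=0.0):
    """Return (baseline, perturbed) predictions."""
    base = predict(channels)
    mod = channels.copy()
    mod[idx] = mod[idx] * factor
    return base, predict(mod)

# Stand-in predictor: the output channel is the sum of the inputs.
def toy_predict(c):
    return c.sum(axis=0)

channels = np.ones((3, 8, 8))          # three input protein channels
base, knocked = digital_perturb(toy_predict, channels, idx=0)
```

The difference between `base` and `knocked` is the model's "response" to silencing that protein, which is what DA/DI experiments inspect, with the advantage over genetic perturbation that the knockdown is exact in space and time.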
Affiliation(s)
- Yang Jiao: Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
- Lingkun Gu: School of Life Sciences, University of Nevada, Las Vegas, NV 89154, USA
- Yingtao Jiang: Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
- Mo Weng: Corresponding author
- Mei Yang: Corresponding author
13. Sun H, Fu X, Abraham S, Jin S, Murphy RF. Improving and evaluating deep learning models of cellular organization. Bioinformatics 2022;38:5299-5306. PMID: 36264139. PMCID: PMC9710556. DOI: 10.1093/bioinformatics/btac688.
Abstract
MOTIVATION: Cells contain dozens of major organelles and thousands of other structures, many of which vary extensively in their number, size, shape and spatial distribution. This complexity and variation dramatically complicates the use of both traditional and deep learning methods to build accurate models of cell organization. Most cellular organelles are distinct objects with defined boundaries that do not overlap, while the pixel resolution of most imaging methods is not sufficient to resolve these boundaries. Thus, while cell organization is conceptually object-based, most current methods are pixel-based. Using extensive image collections in which particular organelles were fluorescently labeled, deep learning methods can be used to build conditional autoencoder models for particular organelles. A major advance occurred with the use of a U-net approach to make multiple models all conditional upon a common reference, unlabeled image, allowing the relationships between different organelles to be at least partially inferred.
RESULTS: We have developed improved Generative Adversarial Network-based approaches for learning these models and have also developed novel criteria for evaluating how well synthetic cell images reflect the properties of real images. The first set of criteria measures how well models preserve the expected property that organelles do not overlap. We also developed a modified loss function that allows retraining of the models to minimize that overlap. The second set of criteria uses object-based modeling to compare object shape and spatial distribution between synthetic and real images. Our work provides the first demonstration that, at least for some organelles, deep learning models can capture object-level properties of cell images.
AVAILABILITY AND IMPLEMENTATION: http://murphylab.cbd.cmu.edu/Software/2022_insilico.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
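The first evaluation idea, that distinct organelles should barely overlap, admits a one-function sketch: threshold two synthesized organelle channels and score their mask overlap. Intersection-over-union and the 0.5 threshold are illustrative choices here, not the paper's exact criteria.

```python
import numpy as np

# Overlap score for two predicted organelle probability maps:
# well-formed synthetic cells should score near zero.

def overlap_iou(p1, p2, thresh=0.5):
    """Intersection-over-union of two thresholded probability maps."""
    m1, m2 = p1 > thresh, p2 > thresh
    union = np.logical_or(m1, m2).sum()
    return 0.0 if union == 0 else np.logical_and(m1, m2).sum() / union
```

Disjoint masks score 0 and identical masks score 1, so the metric directly penalizes generators that paint two organelles onto the same pixels, which is what the paper's modified loss then minimizes during retraining.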
14. ContransGAN: Convolutional Neural Network Coupling Global Swin-Transformer Network for High-Resolution Quantitative Phase Imaging with Unpaired Data. Cells 2022;11:2394. PMID: 35954239. PMCID: PMC9368182. DOI: 10.3390/cells11152394.
Abstract
Optical quantitative phase imaging (QPI) is a frequently used technique for imaging biological cells with high contrast in biology and the life sciences for cell detection and analysis. However, quantitative phase information is difficult to obtain directly with traditional optical microscopy. In addition, there are trade-offs between the parameters of traditional optical microscopes: generally, a higher resolution results in a smaller field of view (FOV) and a narrower depth of field (DOF). To overcome these drawbacks, we report a novel semi-supervised deep learning-based hybrid network framework, termed ContransGAN, which can be used with traditional optical microscopes at different magnifications to obtain high-quality quantitative phase images. The framework combines convolutional operations with a multi-headed self-attention mechanism to improve feature extraction and needs only a few unpaired microscopic images for training. ContransGAN retains the ability of the convolutional neural network (CNN) to extract local features and borrows the ability of the Swin-Transformer network to extract global features. From amplitude images obtained with low-power microscopes, the trained network outputs quantitative phase images similar to those restored by the transport of intensity equation (TIE) under high-power microscopes. Biological and abiotic specimens were tested. The experiments show that the proposed deep learning algorithm is suitable for microscopic images with different resolutions and FOVs and achieves accurate and rapid reconstruction of the corresponding high-resolution (HR) phase images from low-resolution (LR) bright-field intensity images acquired under traditional optical microscopes with different magnifications.
|
15
|
Lee S, Kume H, Urakubo H, Kasai H, Ishii S. Tri-view two-photon microscopic image registration and deblurring with convolutional neural networks. Neural Netw 2022; 152:57-69. [DOI: 10.1016/j.neunet.2022.04.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 02/10/2022] [Accepted: 04/11/2022] [Indexed: 10/18/2022]
|
16
|
Zhou Q, Wen M, Ding M, Zhang X. Unsupervised despeckling of optical coherence tomography images by combining cross-scale CNN with an intra-patch and inter-patch based transformer. OPTICS EXPRESS 2022; 30:18800-18820. [PMID: 36221673 DOI: 10.1364/oe.459477] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 05/03/2022] [Indexed: 06/16/2023]
Abstract
Optical coherence tomography (OCT) is widely used in the diagnosis of ophthalmic diseases, but the quality of OCT images is degraded by speckle noise. Convolutional neural network (CNN)-based methods have attracted much attention for OCT image despeckling. However, these methods generally require noisy-clean image pairs for training, and they struggle to capture global context information effectively. To address these issues, we propose a novel unsupervised despeckling method. The method uses a cross-scale CNN to extract local features and an intra-patch and inter-patch transformer to extract and merge local and global feature information. Based on these extracted features, a reconstruction network produces the final denoised result. The proposed network is trained with a hybrid unsupervised loss function that combines the Neighbor2Neighbor loss, the structural similarity between the despeckled results of the probabilistic non-local means method and our method, and the mean squared error between their features extracted by the VGG network. Experiments on two clinical OCT image datasets show that our method outperforms several popular despeckling algorithms in terms of visual evaluation and quantitative indices.
|
17
|
A Transformer-Based Network for Deformable Medical Image Registration. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20497-5_41] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
18
|
Liu Y, Ji S. CleftNet: Augmented Deep Learning for Synaptic Cleft Detection From Brain Electron Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3507-3518. [PMID: 34129494 PMCID: PMC8674103 DOI: 10.1109/tmi.2021.3089547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Detecting synaptic clefts is a crucial step in investigating the biological function of synapses. Volume electron microscopy (EM) enables the identification of synaptic clefts by capturing EM images with high resolution and fine detail. Machine learning approaches have been employed to automatically predict synaptic clefts from EM images. In this work, we propose a novel augmented deep learning model, CleftNet, for improving synaptic cleft detection from brain EM images. We first propose two novel network components, the feature augmentor and the label augmentor, which augment features and labels to improve cleft representations. The feature augmentor fuses global information from the inputs and learns common morphological patterns in clefts, leading to augmented cleft features. In addition, it can generate outputs with varying dimensions, making it flexible enough to be integrated into any deep network. The proposed label augmentor augments the label of each voxel from a scalar value to a vector containing both the segmentation label and the boundary label, allowing the network to learn important shape information and produce more informative cleft representations. Based on the proposed feature augmentor and label augmentor, we build CleftNet as a U-Net-like network. The effectiveness of our methods is evaluated on both external and internal tasks. CleftNet currently ranks #1 on the external task of the CREMI open challenge, and both quantitative and qualitative results on the internal tasks show that our method significantly outperforms the baseline approaches.
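The label augmentor's core idea, turning each voxel's scalar label into a [segmentation, boundary] vector, can be illustrated directly. In the minimal NumPy sketch below, the boundary channel is derived from the segmentation mask by axis-neighbor erosion; that particular boundary definition is an assumption for illustration, not CleftNet's actual implementation.

```python
import numpy as np

def boundary_label(seg):
    """Foreground voxels with at least one background axis-neighbor.
    (np.roll wraps at the edges, which is fine for masks with a background border.)"""
    seg = seg.astype(bool)
    interior = seg.copy()
    for ax in range(seg.ndim):
        for shift in (1, -1):
            interior &= np.roll(seg, shift, axis=ax)  # erode along each axis
    return (seg & ~interior).astype(np.uint8)

def augment_labels(seg):
    """Turn each voxel's scalar label into a [segmentation, boundary] vector,
    stacked as a trailing channel dimension."""
    seg = seg.astype(np.uint8)
    return np.stack([seg, boundary_label(seg)], axis=-1)
```

Training against the boundary channel alongside the segmentation channel is what lets the network pick up the thin, sheet-like shape of clefts that a plain voxel-wise segmentation loss tends to blur.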
|
19
|
Pratapa A, Doron M, Caicedo JC. Image-based cell phenotyping with deep learning. Curr Opin Chem Biol 2021; 65:9-17. [PMID: 34023800 DOI: 10.1016/j.cbpa.2021.04.001] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 04/10/2021] [Indexed: 12/25/2022]
Abstract
A cell's phenotype is the culmination of several cellular processes through a complex network of molecular interactions that ultimately result in a unique morphological signature. Visual cell phenotyping is the characterization and quantification of these observable cellular traits in images. Recently, cellular phenotyping has undergone a massive overhaul in terms of scale, resolution, and throughput, which is attributable to advances across electronic, optical, and chemical technologies for imaging cells. Coupled with the rapid acceleration of deep learning-based computational tools, these advances have opened up new avenues for innovation across a wide variety of high-throughput cell biology applications. Here, we review applications wherein deep learning is powering the recognition, profiling, and prediction of visual phenotypes to answer important biological questions. As the complexity and scale of imaging assays increase, deep learning offers computational solutions to elucidate the details of previously unexplored cellular phenotypes.
|