1. Stolz BJ, Dhesi J, Bull JA, Harrington HA, Byrne HM, Yoon IHR. Relational Persistent Homology for Multispecies Data with Application to the Tumor Microenvironment. Bull Math Biol 2024; 86:128. PMID: 39287883; PMCID: PMC11408586; DOI: 10.1007/s11538-024-01353-6.
Abstract
Topological data analysis (TDA) is an active field of mathematics for quantifying shape in complex data. Standard methods in TDA, such as persistent homology (PH), typically focus on data consisting of a single type of entity (e.g., cells or molecular species). However, state-of-the-art data collection techniques now generate exquisitely detailed multispecies data, prompting a need for methods that can examine and quantify the relations among them. Such heterogeneous data types arise in many contexts, ranging from biomedical imaging and geospatial analysis to species ecology. Here, we propose two methods for encoding spatial relations among different data types, based on Dowker complexes and witness complexes. We apply the methods to synthetic multispecies data of a tumor microenvironment and analyze topological features that capture relations between different cell types, e.g., blood vessels, macrophages, tumor cells, and necrotic cells. We demonstrate that relational topological features can extract biological insight, including the dominant immune cell phenotype (an important predictor of patient prognosis) and the parameter regimes of a data-generating model. The methods provide a quantitative perspective on the relational analysis of multispecies spatial data, overcome the limits of traditional PH, and are readily computable.
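The Dowker construction at the heart of this abstract is simple to state: a simplex on points of one species (the "landmarks", e.g. macrophages) enters the complex at scale r if a single point of the other species (a "witness", e.g. a tumor cell) lies within r of every vertex. The sketch below is a hypothetical minimal illustration of that rule only, not the authors' implementation (which additionally tracks persistence across all scales); all names are assumptions.

```python
from itertools import combinations
from math import dist

def dowker_complex(landmarks, witnesses, r, max_dim=2):
    """Build the Dowker complex at scale r: a simplex on landmark
    indices is included if some single witness lies within distance r
    of every landmark vertex of that simplex."""
    simplices = set()
    for k in range(1, max_dim + 2):  # simplices with k vertices
        for combo in combinations(range(len(landmarks)), k):
            if any(all(dist(w, landmarks[i]) <= r for i in combo)
                   for w in witnesses):
                simplices.add(combo)
    return simplices

# Toy example: two nearby "macrophages" share one "tumor cell" witness,
# so they span an edge; the third landmark is too far from the witness.
cells_a = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]  # landmark species
cells_b = [(0.5, 0.0)]                          # witness species
cpx = dowker_complex(cells_a, cells_b, r=1.0)
```

Sweeping r from 0 upward and recording when simplices appear and holes fill in is what turns this single-scale snapshot into the relational persistent homology the paper studies.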
Affiliation(s)
- Bernadette J Stolz: Laboratory for Topology and Neuroscience, EPFL, Station 8, Lausanne, 1015, Switzerland; Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK
- Jagdeep Dhesi: Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK
- Joshua A Bull: Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK
- Heather A Harrington: Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK; Wellcome Centre for Human Genetics, University of Oxford, Roosevelt Dr, Headington, Oxford, OX3 7BN, UK
- Helen M Byrne: Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK; Ludwig Institute for Cancer Research, University of Oxford, Old Road Campus Research Building, Roosevelt Dr, Headington, Oxford, OX3 7DQ, UK
- Iris H R Yoon: Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford, OX2 6GG, UK; Department of Mathematics and Computer Science, Wesleyan University, 265 Church Street, Middletown, CT, 06459, USA
2. Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. PMID: 38833195; DOI: 10.1186/s40708-024-00228-9. Open access.
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience, to better understand its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging, making image processing and analysis ever more crucial. However, despite the wealth of neural images generated, access to an integrated image-processing and analysis pipeline is challenging due to scattered information on available tools and methods. Mapping neural connections requires registration to atlases and feature extraction through segmentation and signal detection. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for a better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong: Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea; KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea
3. Tong L, Corrigan A, Kumar NR, Hallbrook K, Orme J, Wang Y, Zhou H. CLANet: A comprehensive framework for cross-batch cell line identification using brightfield images. Med Image Anal 2024; 94:103123. PMID: 38430651; DOI: 10.1016/j.media.2024.103123.
Abstract
Cell line authentication plays a crucial role in the biomedical field, ensuring researchers work with accurately identified cells. Supervised deep learning has made remarkable strides in cell line identification by studying cell morphological features through cell imaging. However, biological batch (bio-batch) effects, a significant issue stemming from the different times at which data is generated, lead to substantial shifts in the underlying data distribution, thus complicating reliable differentiation between cell lines from distinct batch cultures. To address this challenge, we introduce CLANet, a pioneering framework for cross-batch cell line identification using brightfield images, specifically designed to tackle three distinct bio-batch effects. We propose a cell cluster-level selection method to efficiently capture cell density variations, and a self-supervised learning strategy to manage image quality variations, thus producing reliable patch representations. Additionally, we adopt multiple instance learning (MIL) for effective aggregation of instance-level features for cell line identification. Our innovative time-series segment sampling module further enhances MIL's feature-learning capabilities, mitigating biases from varying incubation times across batches. We validate CLANet using data from 32 cell lines across 93 experimental bio-batches from the AstraZeneca Global Cell Bank. Our results show that CLANet outperforms related approaches (e.g., domain adaptation, MIL), demonstrating its effectiveness in addressing bio-batch effects in cell line identification.
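The MIL aggregation idea mentioned above can be sketched as attention pooling: per-instance (e.g. per-patch) scores are combined into one bag-level prediction via softmax attention weights. This is a hedged illustration only; the scalar features, the function name, and the parameters are assumptions, and CLANet's actual module operates on learned feature vectors, not scalars.

```python
import math

def attention_mil(instance_feats, w_attn, w_clf):
    """Attention-pooled multiple instance learning on scalar
    per-instance features: softmax attention weights decide how much
    each instance contributes to the bag embedding, which a linear
    classifier then scores."""
    logits = [w_attn * x for x in instance_feats]
    m = max(logits)                           # stabilise the softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    alphas = [e / z for e in exps]            # attention weights, sum to 1
    bag = sum(a * x for a, x in zip(alphas, instance_feats))
    return w_clf * bag, alphas

# With zero attention logits every instance is weighted equally,
# so the bag embedding reduces to the mean of the instance features.
score, alphas = attention_mil([1.0, 2.0, 3.0], w_attn=0.0, w_clf=1.0)
```

The appeal of this pooling over plain max or mean is that the attention weights are learned, so the model can downweight uninformative patches within a bag.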
Affiliation(s)
- Lei Tong: School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK; Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Adam Corrigan: Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Navin Rathna Kumar: UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Alderley Park, UK
- Kerry Hallbrook: UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Alderley Park, UK
- Jonathan Orme: UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Yinhai Wang: Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Huiyu Zhou: School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
4. Li R, Makogon A, Galochkina T, Lemineur JF, Kanoufi F, Shkirskiy V. Unsupervised Analysis of Optical Imaging Data for the Discovery of Reactivity Patterns in Metal Alloy. Small Methods 2023; 7:e2300214. PMID: 37382395; DOI: 10.1002/smtd.202300214.
Abstract
Operando wide-field optical microscopy imaging yields a wealth of information about the reactivity of metal interfaces, yet the data are often unstructured and challenging to process. In this study, the power of unsupervised machine learning (ML) algorithms is harnessed to analyze chemical reactivity images obtained dynamically by reflectivity microscopy, in combination with ex situ scanning electron microscopy, to identify and cluster the chemical reactivity of particles in an Al alloy. The ML analysis uncovers three distinct clusters of reactivity from unlabeled datasets. A detailed examination of representative reactivity patterns confirms the chemical communication of generated OH- fluxes within particles, as supported by statistical analysis of size distribution and finite element modelling (FEM). The ML procedures also reveal statistically significant patterns of reactivity under dynamic conditions, such as pH acidification. The results align well with a numerical model of chemical communication, underscoring the synergy between data-driven ML and physics-driven FEM approaches.
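The unsupervised clustering step that separates particles into reactivity groups can be sketched with plain k-means on a single scalar reactivity feature. This is an illustrative stand-in under stated assumptions: the abstract does not say which clustering algorithm was used, and real per-particle features are multidimensional.

```python
def kmeans_1d(values, k, iters=50):
    """Plain k-means on scalar features: assign each point to its
    nearest centroid, then move each centroid to the mean of its
    members. Centroids are seeded at evenly spaced order statistics
    so the toy run below is deterministic."""
    svals = sorted(values)
    centroids = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[j].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Three well-separated "reactivity levels" recover three centroids.
reactivity = [0.1, 0.15, 0.2, 5.0, 5.1, 10.0, 10.2]
centres = kmeans_1d(reactivity, k=3)
```

In practice one would cluster a feature vector per particle (intensity trace statistics, size, local pH response) rather than a single scalar, but the assign-then-update loop is the same.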
Affiliation(s)
- Rui Li: Université Paris Cité, ITODYS, CNRS, Paris, 75013, France
5. Piansaddhayanon C, Koracharkornradt C, Laosaengpha N, Tao Q, Ingrungruanglert P, Israsena N, Chuangsuwanich E, Sriswasdi S. Label-free tumor cells classification using deep learning and high-content imaging. Sci Data 2023; 10:570. PMID: 37634014; PMCID: PMC10460430; DOI: 10.1038/s41597-023-02482-8. Open access.
Abstract
Many studies have shown that cellular morphology can be used to distinguish spiked-in tumor cells against a blood-sample background. However, most validation experiments included only homogeneous cell lines and inadequately captured the broad morphological heterogeneity of cancer cells. Furthermore, normal, non-blood cells can be erroneously classified as cancer because their morphology differs from that of blood cells. Here, we constructed a dataset of microscopic images of organoid-derived cancer and normal cells with diverse morphology and developed a proof-of-concept deep learning model that can distinguish cancer cells from normal cells within an unlabeled microscopy image. In total, more than 75,000 organoid-derived cells from 3 cholangiocarcinoma patients were collected. The model achieved an area under the receiver operating characteristic curve (AUROC) of 0.78 and can generalize to cell images from an unseen patient. These resources serve as a foundation for an automated, robust platform for circulating tumor cell detection.
Affiliation(s)
- Chawan Piansaddhayanon: Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, 10330, Thailand
- Chonnuttida Koracharkornradt: Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Napat Laosaengpha: Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Qingyi Tao: NVIDIA AI Technology Center, Singapore, Singapore
- Praewphan Ingrungruanglert: Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Nipan Israsena: Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Department of Pharmacology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Ekapol Chuangsuwanich: Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Sira Sriswasdi: Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Center for Artificial Intelligence in Medicine, Research Affairs, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
6. Dudaie M, Barnea I, Nissim N, Shaked NT. On-chip label-free cell classification based directly on off-axis holograms and spatial-frequency-invariant deep learning. Sci Rep 2023; 13:12370. PMID: 37524884; PMCID: PMC10390541; DOI: 10.1038/s41598-023-38160-3. Open access.
Abstract
We present a rapid label-free imaging flow cytometry and cell classification approach based directly on raw digital holograms. Off-axis holography enables real-time acquisition of cells during rapid flow. However, classification of the cells typically requires reconstruction of their quantitative phase profiles, which is time-consuming. Here, we present a new approach for label-free classification of individual cells based directly on the raw off-axis holographic images, each of which contains the complete complex wavefront (amplitude and quantitative phase profiles) of the cell. To obtain this, we built a convolutional neural network, which is invariant to the spatial frequencies and directions of the interference fringes of the off-axis holograms. We demonstrate the effectiveness of this approach using four types of cancer cells. This approach has the potential to significantly improve both speed and robustness of imaging flow cytometry, enabling real-time label-free classification of individual cells.
Affiliation(s)
- Matan Dudaie: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Itay Barnea: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Noga Nissim: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Natan T Shaked: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
7. Furuya K, Ikura M, Ikura T. Machine learning extracts oncogenic-specific γ-H2AX foci formation pattern upon genotoxic stress. Genes Cells 2023; 28:237-243. PMID: 36565298; DOI: 10.1111/gtc.13005.
Abstract
H2AX is a histone H2A variant that becomes phosphorylated upon genotoxic stress. The phosphorylated H2AX (γ-H2AX) plays an antioncogenic role in the DNA damage response and its foci patterns are highly variable, in terms of intensities and sizes. However, whether characteristic γ-H2AX foci patterns are associated with oncogenesis (oncogenic-specific γ-H2AX foci patterns) remains unknown. We previously reported that a defect in the acetyltransferase activity of TIP60 promotes cancer cell growth in human cell lines. In this study, we compared γ-H2AX foci patterns between TIP60 wild-type cells and TIP60 HAT mutant cells by using machine learning. When focused solely on the intensity and size of γ-H2AX foci, we extracted the TIP60 HAT mutant-like oncogenic-specific γ-H2AX foci pattern among all datasets of γ-H2AX foci patterns. Furthermore, by using the dimensionality reduction method UMAP, we also observed TIP60 HAT mutant-like oncogenic-specific γ-H2AX foci patterns in TIP60 wild-type cells. In summary, we propose the existence of an oncogenic-specific γ-H2AX foci pattern and the importance of a machine learning approach to extract oncogenic signaling among the γ-H2AX foci variations.
Affiliation(s)
- Kanji Furuya: Laboratory of Genome Maintenance, Department of Genome Biology, Radiation Biology Center, Graduate School of Biostudies, Kyoto University, Kyoto, Japan
- Masae Ikura: Laboratory of Chromatin Regulatory Network, Department of Genome Biology, Radiation Biology Center, Graduate School of Biostudies, Kyoto University, Kyoto, Japan
- Tsuyoshi Ikura: Laboratory of Chromatin Regulatory Network, Department of Genome Biology, Radiation Biology Center, Graduate School of Biostudies, Kyoto University, Kyoto, Japan
8. Zinchenko V, Hugger J, Uhlmann V, Arendt D, Kreshuk A. MorphoFeatures for unsupervised exploration of cell types, tissues, and organs in volume electron microscopy. eLife 2023; 12:e80918. PMID: 36795088; PMCID: PMC9934868; DOI: 10.7554/elife.80918. Open access.
Abstract
Electron microscopy (EM) provides a uniquely detailed view of cellular morphology, including organelles and fine subcellular ultrastructure. While the acquisition and (semi-)automatic segmentation of multicellular EM volumes are now becoming routine, large-scale analysis remains severely limited by the lack of generally applicable pipelines for automatic extraction of comprehensive morphological descriptors. Here, we present a novel unsupervised method for learning cellular morphology features directly from 3D EM data: a neural network delivers a representation of cells by shape and ultrastructure. Applied to the full volume of an entire three-segmented worm of the annelid Platynereis dumerilii, it yields a visually consistent grouping of cells supported by specific gene expression profiles. Integration of features across spatial neighbours can retrieve tissues and organs, revealing, for example, a detailed organisation of the animal foregut. We envision that the unbiased nature of the proposed morphological descriptors will enable rapid exploration of very different biological questions in large EM volumes, greatly increasing the impact of these invaluable, but costly resources.
Affiliation(s)
- Valentyna Zinchenko: Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Johannes Hugger: European Bioinformatics Institute, European Molecular Biology Laboratory (EMBL), Cambridge, United Kingdom
- Virginie Uhlmann: European Bioinformatics Institute, European Molecular Biology Laboratory (EMBL), Cambridge, United Kingdom
- Detlev Arendt: Developmental Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Anna Kreshuk: Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
9. Microscopic image-based classification of adipocyte differentiation by machine learning. Histochem Cell Biol 2022; 159:313-327. PMID: 36504003; DOI: 10.1007/s00418-022-02168-z.
Abstract
Adipocyte differentiation is a sequential process involving increased expression of peroxisome proliferator-activated receptor gamma (PPARγ), adipocyte-specific gene expression, and accumulation of lipid droplets in the cytoplasm. Expression of the transcription factors involved is usually detected using canonical biochemical or biomolecular procedures such as Western blotting or qPCR of pooled cell lysates. While this provides a useful average index of adipogenesis for some populations, the precise stage of adipogenesis cannot be distinguished at the single-cell level, because the heterogeneous nature of differentiation among cells limits the utility of averaged data. We have created a classifier to sort cells and used it to determine the stage of adipocyte differentiation at the single-cell level, using a machine learning method with microscopic images of cells stained for PPARγ and lipid droplets as input data. Our results show that the classifier can successfully determine the precise stage of differentiation. Stage classification and subsequent model fitting using the sequential reaction model revealed that pioglitazone and rosiglitazone act by promoting the transition from the stage of increased PPARγ expression to the next stage. This indicates that these drugs are PPARγ agonists, and that our classifier and model can accurately estimate drug action points and would be suitable for evaluating the stage/state of individual cells during differentiation or disease progression. The classifier incorporates both biochemical and morphological information derived from immunofluorescence images of cells, and so overcomes limitations of current models.
10. Wei Z, Liu X, Yan R, Sun G, Yu W, Liu Q, Guo Q. Pixel-level multimodal fusion deep networks for predicting subcellular organelle localization from label-free live-cell imaging. Front Genet 2022; 13:1002327. PMID: 36386823; PMCID: PMC9644055; DOI: 10.3389/fgene.2022.1002327. Open access.
Abstract
Complex intracellular organizations are commonly represented by dividing the metabolic process of cells into different organelles. Therefore, identifying subcellular organelle architecture is significant for understanding intracellular structural properties, specific functions, and biological processes in cells. However, the discrimination of these structures in their natural organizational environment, and their functional consequences, are not clear. In this article, we propose a new pixel-level multimodal fusion (PLMF) deep network that predicts the location of cellular organelles from label-free cell optical microscopy images, followed by deep-learning-based automated image denoising. It provides valuable insights that can greatly improve the specificity of label-free cell optical microscopy, using a Transformer-Unet network to predict the ground-truth imaging corresponding to different subcellular organelle architectures. The proposed prediction method combines the advantages of a transformer's global prediction and a CNN's local analysis of background features in label-free cell optical microscopy images, improving prediction accuracy. Our experimental results showed that the PLMF network can achieve a Pearson correlation coefficient (PCC) above 0.91 between estimated and true fractions on lung cancer cell imaging datasets. In addition, we applied the PLMF network to cell images for label-free prediction of several different subcellular components simultaneously, rather than using several fluorescent labels. These results open up a new way for the time-resolved study of subcellular components in different cells, especially cancer cells.
Affiliation(s)
- Zhihao Wei: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China
- Xi Liu: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China
- Ruiqing Yan: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China
- Guocheng Sun: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China; School of Mechanical Engineering & Hydrogen Energy Research Centre, Beijing Institute of Petrochemical Technology, Beijing, China
- Weiyong Yu: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China
- Qiang Liu: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China
- Qianjin Guo (corresponding author): Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China; School of Mechanical Engineering & Hydrogen Energy Research Centre, Beijing Institute of Petrochemical Technology, Beijing, China
11. Multiple Parallel Fusion Network for Predicting Protein Subcellular Localization from Stimulated Raman Scattering (SRS) Microscopy Images in Living Cells. Int J Mol Sci 2022; 23:10827. PMID: 36142736; PMCID: PMC9504098; DOI: 10.3390/ijms231810827. Open access.
Abstract
Stimulated Raman scattering (SRS) microscopy is a powerful tool for label-free detailed recognition and investigation of the cellular and subcellular structures of living cells. Determining subcellular protein localization from cell-level SRS images is one of the basic goals of cell biology: it can provide useful clues to protein functions and biological processes, and can help to determine priorities and select appropriate targets for drug development. However, the bottleneck in predicting subcellular protein locations from SRS cell imaging lies in modeling the complicated relationships concealed beneath the original imaging data, owing to the spectral overlap of information from different protein molecules. In this work, a multiple parallel fusion network, MPFnetwork, is proposed to study subcellular locations from SRS images. The model uses multiple parallel fusion to construct feature representations and combines multiple nonlinear decomposition algorithms as the automated subcellular detection method. Our experimental results showed that MPFnetwork achieves a Dice correlation above 0.93 between estimated and true fractions on SRS lung cancer cell datasets. In addition, we applied the MPFnetwork method to cell images for label-free prediction of several different subcellular components simultaneously, rather than using several fluorescent labels. These results open up a new method for the time-resolved study of subcellular components in different cells, especially cancer cells.
12. Wong KS, Zhong X, Low CSL, Kanchanawong P. Self-supervised classification of subcellular morphometric phenotypes reveals extracellular matrix-specific morphological responses. Sci Rep 2022; 12:15329. PMID: 36097150; PMCID: PMC9468179; DOI: 10.1038/s41598-022-19472-2. Open access.
Abstract
Cell morphology is profoundly influenced by cellular interactions with microenvironmental factors such as the extracellular matrix (ECM). Upon adhesion to specific ECM, various cell types are known to exhibit different but distinctive morphologies, suggesting that ECM-dependent cell morphological responses may harbour rich information on cellular signalling states. However, the inherent morphological complexity of cellular and subcellular structures has posed an ongoing challenge for automated quantitative analysis. Since multi-channel fluorescence microscopy provides robust molecular specificity important for the biological interpretations of observed cellular architecture, here we develop a deep learning-based analysis pipeline for the classification of cell morphometric phenotypes from multi-channel fluorescence micrographs, termed SE-RNN (residual neural network with squeeze-and-excite blocks). We demonstrate SE-RNN-based classification of the distinct morphological signatures observed when fibroblasts or epithelial cells are presented with different ECMs. Our results underscore that cell shapes are non-random and establish a framework for classifying cell shapes into distinct morphological signatures in a cell-type- and ECM-specific manner.
Affiliation(s)
- Kin Sun Wong: Department of Biomedical Engineering, National University of Singapore, Singapore, 117411, Republic of Singapore
- Xueying Zhong: Mechanobiology Institute, National University of Singapore, Singapore, 117411, Republic of Singapore
- Christine Siok Lan Low: Mechanobiology Institute, National University of Singapore, Singapore, 117411, Republic of Singapore
- Pakorn Kanchanawong: Department of Biomedical Engineering, National University of Singapore, Singapore, 117411, Republic of Singapore; Mechanobiology Institute, National University of Singapore, Singapore, 117411, Republic of Singapore
13. Barinda AJ, Arozal W, Yuasa S. A review of pathobiological mechanisms and potential application of medicinal plants for vascular aging: focus on endothelial cell senescence. Medical Journal of Indonesia 2022. DOI: 10.13181/mji.rev.226064. Open access.
Abstract
Endothelial cell (EC) senescence plays a pivotal role in aging and is essential to the pathomechanism of aging-related diseases. Drugs targeting cellular senescence, such as senolytic or senomorphic drugs, may prevent aging and age-related diseases, but no such agents have yet been developed to target EC senescence. Some medicinal plants may have anti-senescence properties but remain undiscovered. Deep learning has become an emerging approach to drug discovery, for example through analysis of cellular morphology, and would be a valuable tool for screening herbal candidates that rejuvenate senescent ECs. Of note, several medicinal plants found in Indonesia, such as Curcuma longa L., Piper retrofractum, Guazuma ulmifolia Lam, Centella asiatica (L.) Urb., and Garcinia mangostana L., may possess anti-senescence effects. This review highlights the importance of targeting EC senescence, the use of deep learning for medicinal plant screening, and some potential anti-senescence plants originating from Indonesia.
14
Watson ER, Taherian Fard A, Mar JC. Computational Methods for Single-Cell Imaging and Omics Data Integration. Front Mol Biosci 2022; 8:768106. [PMID: 35111809 PMCID: PMC8801747 DOI: 10.3389/fmolb.2021.768106] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5]
Abstract
Integrating single-cell omics and single-cell imaging allows for a more effective characterisation of the underlying mechanisms that drive a phenotype at the tissue level, creating a comprehensive profile at the cellular level. Although the use of imaging data is well established in biomedical research, its primary application has been to observe phenotypes at the tissue or organ level, often using medical imaging techniques such as MRI, CT, and PET. These imaging technologies complement omics-based data in biomedical research because they are helpful for identifying associations between genotype and phenotype, along with functional changes occurring at the tissue level. Single-cell imaging can act as an intermediary between these levels. Meanwhile, new technologies continue to arrive that can be used to interrogate the genome of single cells and its related omics datasets. As these two areas, single-cell imaging and single-cell omics, each advance independently with the development of novel techniques, the opportunity to integrate these data types becomes increasingly attractive. This review outlines some of the technologies and methods currently available for generating, processing, and analysing single-cell omics and imaging data, and how they could be integrated to further our understanding of complex biological phenomena like ageing. We place particular emphasis on machine learning algorithms because of their ability to identify complex patterns in large multidimensional data.
Affiliation(s)
- Atefeh Taherian Fard
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
- Jessica Cara Mar
- Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
15
Nguyen P, Chien S, Dai J, Monnat RJ, Becker PS, Kueh HY. Unsupervised discovery of dynamic cell phenotypic states from transmitted light movies. PLoS Comput Biol 2021; 17:e1009626. [PMID: 34968384 PMCID: PMC8754342 DOI: 10.1371/journal.pcbi.1009626] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
Identification of cell phenotypic states within heterogeneous populations, along with elucidation of their switching dynamics, is a central challenge in modern biology. Conventional single-cell analysis methods typically provide only indirect, static phenotypic readouts. Transmitted light images, on the other hand, provide direct morphological readouts and can be acquired over time to provide a rich data source for dynamic cell phenotypic state identification. Here, we describe an end-to-end deep learning platform, UPSIDE (Unsupervised Phenotypic State IDEntification), for discovering cell states and their dynamics from transmitted light movies. UPSIDE uses the variational auto-encoder architecture to learn latent cell representations, which are then clustered for state identification, decoded for feature interpretation, and linked across movie frames for transition rate inference. Using UPSIDE, we identified distinct blood cell types in a heterogeneous dataset. We then analyzed movies of patient-derived acute myeloid leukemia cells, from which we identified stem-cell associated morphological states as well as the transition rates to and from these states. UPSIDE opens up the use of transmitted light movies for systematic exploration of cell state heterogeneity and dynamics in biology and medicine.
Affiliation(s)
- Phuc Nguyen
- Department of Bioengineering, University of Washington, Seattle, Washington, United States of America
- Molecular Engineering and Sciences Institute, University of Washington, Seattle, Washington, United States of America
- Sylvia Chien
- Division of Hematology, University of Washington, Seattle, Washington, United States of America
- Jin Dai
- Division of Hematology, University of Washington, Seattle, Washington, United States of America
- Raymond J. Monnat
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, Washington, United States of America
- Department of Genome Sciences, University of Washington, Seattle, Washington, United States of America
- Institute for Stem Cell and Regenerative Medicine, University of Washington, Seattle, Washington, United States of America
- Pamela S. Becker
- Division of Hematology, University of Washington, Seattle, Washington, United States of America
- Clinical Research Division, Fred Hutchinson Cancer Research Center, Seattle, Washington, United States of America
- Division of Hematology/Oncology, Department of Medicine, University of California, Irvine, California, United States of America
- Chao Family Comprehensive Cancer Center Cancer Research Institute, University of California, Irvine, California, United States of America
- Hao Yuan Kueh
- Department of Bioengineering, University of Washington, Seattle, Washington, United States of America
- Molecular Engineering and Sciences Institute, University of Washington, Seattle, Washington, United States of America
- Institute for Stem Cell and Regenerative Medicine, University of Washington, Seattle, Washington, United States of America
16
Theissen H, Chakraborti T, Malacrino S, Sirinukunwattana K, Royston D, Rittscher J. Learning Cellular Phenotypes through Supervision. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3592-3595. [PMID: 34892015 DOI: 10.1109/embc46164.2021.9629898] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Image-based cell phenotyping is an important open problem in computational pathology. The two principal challenges are: 1) making cell cluster properties insensitive to experimental settings (such as seed point and feature selection) and 2) ensuring that the emerging phenotypes are biologically relevant and support clinical reporting. To gauge robustness, we first compare the consistency of the phenotypes obtained using self-supervised and supervised features. Through case classification, we analyse the relevance of the self-supervised and supervised feature sets with respect to the clinical diagnosis. In addition, we demonstrate how model explainability can be added through Shapley values to identify more disease-relevant cellular phenotypes and measure their importance in the context of the disease. Myeloproliferative neoplasms, a haematopoietic stem cell disorder in which one particular cell type is of diagnostic relevance, are used as an exemplar. Experiments conducted on a set of bone marrow trephines demonstrate an improvement of 7.4% in case-classification accuracy using cellular phenotypes derived from the supervised scenario.
17
Qiao Y, Zhang Y, Liu N, Chen P, Liu Y. An End-to-End Pipeline for Early Diagnosis of Acute Promyelocytic Leukemia Based on a Compact CNN Model. Diagnostics (Basel) 2021; 11:1237. [PMID: 34359320 PMCID: PMC8304210 DOI: 10.3390/diagnostics11071237] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
Timely microscopy screening of peripheral blood smears is essential for the diagnosis of acute promyelocytic leukemia (APL) due to the occurrence of early death (ED) before or during initial therapy. Manual screening is time-consuming and tedious, and may lead to missed diagnosis or misdiagnosis because of subjective bias. To address these problems, we develop a three-step pipeline to help in the early diagnosis of APL from peripheral blood smears. The pipeline consists of leukocyte focusing, cell classification, and diagnostic opinion. As its key component, a compact classification model based on attention-embedded convolutional neural network blocks is proposed to distinguish promyelocytes from normal leukocytes. The model is validated both on the combination of two public datasets, APL-Cytomorphology_LMU and APL-Cytomorphology_JHH, and on a clinical dataset, yielding precisions of 96.53% and 99.20%, respectively. The results indicate that our model outperforms the other evaluated popular classification models owing to its better accuracy and smaller size. Furthermore, the entire pipeline is validated on realistic patient data. The proposed method promises to act as an assistant tool for APL diagnosis.
Affiliation(s)
- Yifan Qiao
- The College of Computer Science, Sichuan University, Chengdu 610065, China
- Yi Zhang
- The College of Computer Science, Sichuan University, Chengdu 610065, China
- Nian Liu
- The College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Pu Chen
- The Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Correspondence: (P.C.); (Y.L.); Tel.: +86-021-64041990 (ext. 2435) (P.C.); +86-028-85120790 (Y.L.)
- Yan Liu
- The College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Correspondence: (P.C.); (Y.L.); Tel.: +86-021-64041990 (ext. 2435) (P.C.); +86-028-85120790 (Y.L.)
18
Pérez-Aliacar M, Doweidar MH, Doblaré M, Ayensa-Jiménez J. Predicting cell behaviour parameters from glioblastoma on a chip images. A deep learning approach. Comput Biol Med 2021; 135:104547. [PMID: 34139437 DOI: 10.1016/j.compbiomed.2021.104547] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0]
Abstract
The broad possibilities offered by microfluidic devices in relation to massive data monitoring and acquisition open the door to the use of deep learning technologies in a very promising field: cell culture monitoring. In this work, we develop a methodology for parameter identification in cell culture from fluorescence images using convolutional neural networks (CNNs). We apply this methodology to the in vitro study of glioblastoma (GBM), the most common, aggressive, and lethal primary brain tumour. In particular, the aim is to predict the three parameters defining the go-or-grow GBM behaviour, which is determinant for tumour prognosis and response to treatment. The data used to train the network are obtained from a mathematical model previously validated with in vitro experimental results. The resulting CNN provides remarkably accurate predictions (Pearson's ρ > 0.99 for all parameters). Moreover, it proves to be robust, to filter noise, and to generalise. After training and validation with synthetic data, we predict the parameters corresponding to a real image of a microfluidic experiment, and the results show good performance of the CNN. The proposed technique may set the first steps towards patient-specific tools able to predict tumour evolution in real time for each particular patient, thanks to a combined in vitro-in silico approach.
Affiliation(s)
- Marina Pérez-Aliacar
- Aragon Institute of Engineering Research (I3A), University of Zaragoza, Mariano Esquillor S/N, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, María de Luna S/N, Zaragoza, Spain
- Mohamed H Doweidar
- Aragon Institute of Engineering Research (I3A), University of Zaragoza, Mariano Esquillor S/N, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, María de Luna S/N, Zaragoza, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Monforte de Lemos 3-5, Pabellón 11, Planta 0, Madrid, Spain
- Manuel Doblaré
- Aragon Institute of Engineering Research (I3A), University of Zaragoza, Mariano Esquillor S/N, Zaragoza, Spain; Aragon Institute of Health Research (IIS Aragón), University of Zaragoza, San Juan Bosco 13, Zaragoza, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Monforte de Lemos 3-5, Pabellón 11, Planta 0, Madrid, Spain
- Jacobo Ayensa-Jiménez
- Aragon Institute of Engineering Research (I3A), University of Zaragoza, Mariano Esquillor S/N, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, María de Luna S/N, Zaragoza, Spain; Aragon Institute of Health Research (IIS Aragón), University of Zaragoza, San Juan Bosco 13, Zaragoza, Spain
19
Liu Z, Jin L, Chen J, Fang Q, Ablameyko S, Yin Z, Xu Y. A survey on applications of deep learning in microscopy image analysis. Comput Biol Med 2021; 134:104523. [PMID: 34091383 DOI: 10.1016/j.compbiomed.2021.104523] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0]
Abstract
Advanced microscopy enables us to acquire large quantities of time-lapse images to visualize the dynamic characteristics of tissues, cells, or molecules. Microscopy images typically vary in signal-to-noise ratio and contain a wealth of information whose processing requires multiple parameters and time-consuming iterative algorithms. Precise analysis and statistical quantification are often needed to understand the biological mechanisms underlying these dynamic image sequences, which has become a major challenge in the field. As deep learning technologies develop quickly, they have been applied in bioimage processing more and more frequently. Novel deep learning models based on convolutional neural networks have been developed and shown to achieve inspiring outcomes. This review article introduces the applications of deep learning algorithms in microscopy image analysis, including image classification, region segmentation, object tracking, and super-resolution reconstruction. We also discuss the drawbacks of existing deep learning-based methods, especially the challenges of training dataset acquisition and evaluation, and propose potential solutions. Furthermore, the latest developments in augmented intelligent microscopy based on deep learning technology may lead to a revolution in biomedical research.
Affiliation(s)
- Zhichao Liu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Luhong Jin
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Jincheng Chen
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
- Qiuyu Fang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China
- Sergey Ablameyko
- National Academy of Sciences, United Institute of Informatics Problems, Belarusian State University, Minsk, 220012, Belarus
- Zhaozheng Yin
- AI Institute, Department of Biomedical Informatics and Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Department of Endocrinology, The Affiliated Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310016, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
20
Simionato G, Hinkelmann K, Chachanidze R, Bianchi P, Fermo E, van Wijk R, Leonetti M, Wagner C, Kaestner L, Quint S. Red blood cell phenotyping from 3D confocal images using artificial neural networks. PLoS Comput Biol 2021; 17:e1008934. [PMID: 33983926 PMCID: PMC8118337 DOI: 10.1371/journal.pcbi.1008934] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7]
Abstract
The investigation of cell shapes mostly relies on the manual classification of 2D images, resulting in a subjective and time-consuming evaluation based on only a portion of the cell surface. We present a dual-stage neural network architecture for analyzing fine shape details from confocal microscopy recordings in 3D. The system, tested on red blood cells, uses training data from both healthy donors and patients with a congenital blood disease, namely hereditary spherocytosis. Characteristic shape features are revealed from the spherical harmonics spectrum of each cell and are automatically processed to create a reproducible and unbiased shape recognition and classification. The results show the relation between the particular genetic mutation causing the disease and the shape profile. With the obtained 3D phenotypes, we suggest our method for the diagnostics and theragnostics of blood diseases. Beyond the application employed in this study, our algorithms can easily be adapted for the 3D shape phenotyping of other cell types and extended to other applications, such as automated industrial 3D quality control.
Affiliation(s)
- Greta Simionato
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Institute for Clinical and Experimental Surgery, Saarland University, Campus University Hospital, Homburg, Germany
- Konrad Hinkelmann
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Revaz Chachanidze
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- CNRS, University Grenoble Alpes, Grenoble INP, LRP, Grenoble, France
- Paola Bianchi
- Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milano, Italy
- Elisa Fermo
- Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milano, Italy
- Richard van Wijk
- Department of Clinical Chemistry & Haematology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marc Leonetti
- CNRS, University Grenoble Alpes, Grenoble INP, LRP, Grenoble, France
- Christian Wagner
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Physics and Materials Science Research Unit, University of Luxembourg, Luxembourg City, Luxembourg
- Lars Kaestner
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Theoretical Medicine and Biosciences, Saarland University, Campus University Hospital, Homburg, Germany
- Stephan Quint
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Cysmic GmbH, Saarland University, Saarbrücken, Germany
21
Sanicola HW, Stewart CE, Mueller M, Ahmadi F, Wang D, Powell SK, Sarkar K, Cutbush K, Woodruff MA, Brafman DA. Guidelines for establishing a 3-D printing biofabrication laboratory. Biotechnol Adv 2020; 45:107652. [PMID: 33122013 DOI: 10.1016/j.biotechadv.2020.107652] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
Advanced manufacturing and 3D printing are transformative technologies currently undergoing rapid adoption in healthcare, a traditionally non-manufacturing sector. Recent development in this field, largely enabled by merging different disciplines, has led to important clinical applications from anatomical models to regenerative bioscaffolding and devices. Although much research to date has focused on materials, designs, processes, and products, little attention has been given to the design and requirements of facilities for enabling clinically relevant biofabrication solutions. These facilities are critical to overcoming the major hurdles to clinical translation, including solving important issues such as reproducibility, quality control, regulations, and commercialization. To improve process uniformity and ensure consistent development and production, large-scale manufacturing of engineered tissues and organs will require standardized facilities, equipment, qualification processes, automation, and information systems. This review presents current and forward-thinking guidelines to help design biofabrication laboratories engaged in engineering model and tissue constructs for therapeutic and non-therapeutic applications.
Affiliation(s)
- Henry W Sanicola
- Faculty of Medicine, The University of Queensland, Brisbane 4006, Australia
- Caleb E Stewart
- Department of Neurosurgery, Louisiana State Health Sciences Center, Shreveport, LA 71103, USA
- Farzad Ahmadi
- Department of Electrical and Computer Engineering, Youngstown State University, Youngstown, OH 44555, USA
- Dadong Wang
- Quantitative Imaging Research Team, Data61, Commonwealth Scientific and Industrial Research Organization, Marsfield, NSW 2122, Australia
- Sean K Powell
- Science and Engineering Faculty, Queensland University of Technology, Brisbane 4029, Australia
- Korak Sarkar
- M3D Laboratory, Ochsner Health System, New Orleans, LA 70121, USA
- Kenneth Cutbush
- Faculty of Medicine, The University of Queensland, Brisbane 4006, Australia
- Maria A Woodruff
- Science and Engineering Faculty, Queensland University of Technology, Brisbane 4029, Australia
- David A Brafman
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
22
Yao K, Rochman ND, Sun SX. CTRL - a label-free artificial intelligence method for dynamic measurement of single-cell volume. J Cell Sci 2020; 133:jcs245050. [PMID: 32094267 DOI: 10.1242/jcs.245050] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5]
Abstract
Measuring the physical size of a cell is valuable in understanding cell growth control. Current single-cell volume measurement methods for mammalian cells are labor intensive, inflexible, and can cause cell damage. We introduce CTRL: Cell Topography Reconstruction Learner, a label-free technique combining a deep learning algorithm with the fluorescence exclusion method to reconstruct cell topography and estimate mammalian cell volume from differential interference contrast (DIC) microscopy images alone. The method achieves quantitative accuracy, requires minimal sample preparation, and applies to a wide range of biological and experimental conditions. It can be used to track single-cell volume dynamics over arbitrarily long time periods. For HT1080 fibrosarcoma cells, we observe that cell size at division is positively correlated with cell size at birth (sizer), and that there is a noticeable reduction in cell size fluctuations at 25% completion of the cell cycle.
Affiliation(s)
- Kai Yao
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA
- Nash D Rochman
- Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Sean X Sun
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA; Physical Sciences in Oncology Center (PSOC), Johns Hopkins University, Baltimore, MD 21218, USA
23
Soetje B, Fuellekrug J, Haffner D, Ziegler WH. Application and Comparison of Supervised Learning Strategies to Classify Polarity of Epithelial Cell Spheroids in 3D Culture. Front Genet 2020; 11:248. [PMID: 32292417 PMCID: PMC7119422 DOI: 10.3389/fgene.2020.00248] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0]
Abstract
Three-dimensional culture systems that allow generation of monolayered epithelial cell spheroids are widely used to study epithelial function in vitro. Epithelial spheroid formation is applied to address the cellular consequences of (mono)genetic disorders, that is, ciliopathies, in toxicity testing, or to develop treatment options aimed at restoring proper epithelial cell characteristics and function. Although it has the potential of a high-throughput method, the main obstacle to efficient application of the spheroid formation assay so far is the laborious, time-consuming, and bias-prone analysis of spheroid images by individuals: hundreds of multidimensional fluorescence images are blinded and rated by three persons, and differences in ratings are subsequently compared and discussed. Here, we apply supervised learning and compare strategies based on machine learning versus deep learning. While deep learning approaches can directly process raw image data, machine learning requires transformed data of features extracted from fluorescence images. We verify the accuracy of both strategies on a validation data set, analyse an experimental data set, and observe that different strategies can be very accurate. Deep learning, however, is less sensitive to overfitting and experimental batch-to-batch variations, thus providing a rather powerful and easily adjustable classification tool.
Affiliation(s)
- Birga Soetje
- Department of Paediatric Kidney, Liver and Metabolic Diseases, Hannover Medical School, Hanover, Germany
- Joachim Fuellekrug
- Molecular Cell Biology Laboratory, Internal Medicine IV, University Hospital Heidelberg, Heidelberg, Germany
- Dieter Haffner
- Department of Paediatric Kidney, Liver and Metabolic Diseases, Hannover Medical School, Hanover, Germany
- Wolfgang H. Ziegler
- Department of Paediatric Kidney, Liver and Metabolic Diseases, Hannover Medical School, Hanover, Germany
24
Sun J, Tárnok A, Su X. Deep Learning-Based Single-Cell Optical Image Studies. Cytometry A 2020; 97:226-240. [PMID: 31981309 DOI: 10.1002/cyto.a.23973] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0]
Abstract
Optical imaging technology, which has the advantages of high sensitivity and cost-effectiveness, greatly promotes the progress of nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical image data are obtained that may require machine learning for analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, which brings a new dawn for single-cell optical image studies with an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, reviewing popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning. Image preprocessing and deep learning model training methods are then summarized. Applications based on deep learning techniques in the field of single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and the design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also discussed. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.
Affiliation(s)
- Jing Sun
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
- Attila Tárnok
- Department of Therapy Validation, Fraunhofer Institute for Cell Therapy and Immunology (IZI), Leipzig, Germany; Institute for Medical Informatics, Statistics and Epidemiology (IMISE), University of Leipzig, Leipzig, Germany
- Xuantao Su
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China