1. Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. PMID: 38219643; DOI: 10.1016/j.compbiomed.2023.107912.
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
2. Antonelli L, Polverino F, Albu A, Hada A, Asteriti IA, Degrassi F, Guarguaglini G, Maddalena L, Guarracino MR. ALFI: Cell cycle phenotype annotations of label-free time-lapse imaging data from cultured human cells. Sci Data 2023; 10:677. PMID: 37794110; PMCID: PMC10551030; DOI: 10.1038/s41597-023-02540-1.
Abstract
Detecting and tracking multiple moving objects in a video is a challenging task. For living cells, the task becomes even more arduous as cells change their morphology over time, can partially overlap, and mitosis leads to new cells. Differently from fluorescence microscopy, label-free techniques can be easily applied to almost all cell lines, reducing sample preparation complexity and phototoxicity. In this study, we present ALFI, a dataset of images and annotations for label-free microscopy, made publicly available to the scientific community, that notably extends the current panorama of expertly labeled data for detection and tracking of cultured living nontransformed and cancer human cells. It consists of 29 time-lapse image sequences from HeLa, U2OS, and hTERT RPE-1 cells under different experimental conditions, acquired by differential interference contrast microscopy, for a total of 237.9 hours. It contains various annotations (pixel-wise segmentation masks, object-wise bounding boxes, tracking information). The dataset is useful for testing and comparing methods for identifying interphase and mitotic events and reconstructing their lineage, and for discriminating different cellular phenotypes.
Affiliation(s)
- Laura Antonelli
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
- Federica Polverino
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Alexandra Albu
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Aroj Hada
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Italia A Asteriti
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Francesca Degrassi
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Giulia Guarguaglini
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Lucia Maddalena
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
- Mario R Guarracino
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Laboratory of Algorithms and Technologies for Networks Analysis, National Research University Higher School of Economics, Moscow, Russia
3. Zhang H, Nguyen DH, Tsuda K. Differentiable optimization layers enhance GNN-based mitosis detection. Sci Rep 2023; 13:14306. PMID: 37653108; PMCID: PMC10471751; DOI: 10.1038/s41598-023-41562-y.
Abstract
Automatic mitosis detection from video is an essential step in analyzing proliferative behaviour of cells. In existing studies, a conventional object detector such as Unet is combined with a link prediction algorithm to find correspondences between parent and daughter cells. However, they do not take into account the biological constraint that a cell in a frame can correspond to up to two cells in the next frame. Our model called GNN-DOL enables mitosis detection by complementing a graph neural network (GNN) with a differentiable optimization layer (DOL) that implements the constraint. In time-lapse microscopy sequences cultured under four different conditions, we observed that the layer substantially improved detection performance in comparison with GNN-based link prediction. Our results illustrate the importance of incorporating biological knowledge explicitly into deep learning models.
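The biological constraint described above, a cell in one frame corresponding to at most two cells in the next, can be sketched with a plain assignment solver by giving each parent two candidate "slots" (a generic illustration using SciPy, not the paper's GNN-DOL implementation; the function name is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_with_mitosis_constraint(cost):
    """Assign each daughter cell (column) to a parent cell (row), allowing a
    parent at most two daughters by duplicating each parent row twice.
    cost[i, j] is the linking cost between parent i and daughter j."""
    expanded = np.repeat(cost, 2, axis=0)             # two slots per parent
    rows, cols = linear_sum_assignment(expanded)      # optimal one-to-one match
    return [(r // 2, c) for r, c in zip(rows, cols)]  # map slots back to parents

# Toy example: 2 parents, 3 daughters; parent 0 divides into daughters 0 and 1.
cost = np.array([[0.1, 0.2, 9.0],
                 [9.0, 9.0, 0.1]])
print(link_with_mitosis_constraint(cost))
```

Duplicating rows is a standard trick for encoding "at most k matches per row" in a linear assignment problem; the paper instead builds the constraint into a differentiable layer so it can be trained end-to-end with the GNN.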
Affiliation(s)
- Haishan Zhang
- Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba, 277-8561, Japan
- Dai Hai Nguyen
- Department of Computer Science, The University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8577, Japan
- Koji Tsuda
- Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba, 277-8561, Japan
- RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027, Japan
- Research and Services Division of Materials Data and Integrated System, National Institute for Materials Science, Tsukuba, Ibaraki, 305-0047, Japan
4. Zargari A, Lodewijk GA, Mashhadi N, Cook N, Neudorf CW, Araghbidikashani K, Hays R, Kozuki S, Rubio S, Hrabeta-Robinson E, Brooks A, Hinck L, Shariati SA. DeepSea is an efficient deep-learning model for single-cell segmentation and tracking in time-lapse microscopy. Cell Rep Methods 2023; 3:100500. PMID: 37426758; PMCID: PMC10326378; DOI: 10.1016/j.crmeth.2023.100500.
Abstract
Time-lapse microscopy is the only method that can directly capture the dynamics and heterogeneity of fundamental cellular processes at the single-cell level with high temporal resolution. Successful application of single-cell time-lapse microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. However, segmentation and tracking of single cells remain challenging for the analysis of time-lapse microscopy images, in particular for widely available and non-toxic imaging modalities such as phase-contrast imaging. This work presents a versatile and trainable deep-learning model, termed DeepSea, that allows for both segmentation and tracking of single cells in sequences of phase-contrast live microscopy images with higher precision than existing models. We showcase the application of DeepSea by analyzing cell size regulation in embryonic stem cells.
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Gerrald A. Lodewijk
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Nathan Cook
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Celine W. Neudorf
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Robert Hays
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Sayaka Kozuki
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Stefany Rubio
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eva Hrabeta-Robinson
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Angela Brooks
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Lindsay Hinck
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
5. Han L, Su H, Yin Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks. IEEE Trans Med Imaging 2023; 42:1068-1082. PMID: 36409800; DOI: 10.1109/tmi.2022.3223677.
Abstract
Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of the specifically designed microscope, phase contrast microscopy images contain artifacts such as halo and shade-off, which hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. The approximated models do not exactly reflect the imaging principle of the phase contrast microscope, and accordingly the image restoration obtained by solving the corresponding deconvolution process is not perfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, instead of mathematically deriving the inverse operator of the model, which is technically impossible. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images enable high-quality cell segmentation to be achieved by simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
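The claim that restored images can be segmented by simple thresholding can be illustrated with Otsu's method on a synthetic bimodal image (a generic NumPy sketch, not the authors' released code):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # background pixel count
    w1 = w0[-1] - w0                           # foreground pixel count
    m0 = np.cumsum(hist * centers)             # cumulative intensity mass
    mu0 = m0 / np.maximum(w0, 1)               # background mean
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)    # foreground mean
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# Synthetic "restored" image: uniform background with one bright cell blob.
# Any cut between the two modes separates them; Otsu finds one automatically.
img = np.full((64, 64), 0.2)
img[20:40, 20:40] = 0.8
mask = img > otsu_threshold(img)
print(mask.sum())  # → 400 foreground pixels (the 20×20 blob)
```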
6. Fukai YT, Kawaguchi K. LapTrack: linear assignment particle tracking with tunable metrics. Bioinformatics 2022; 39:6887138. PMID: 36495181; PMCID: PMC9825786; DOI: 10.1093/bioinformatics/btac799.
Abstract
MOTIVATION: Particle tracking is an important analysis step in a variety of scientific fields and is particularly indispensable for the construction of cellular lineages from live images. Although various supervised machine learning methods have been developed for cell tracking, the diversity of the data still necessitates heuristic methods that require parameter estimation from small amounts of data. For this, solving tracking as a linear assignment problem (LAP) has been widely applied and demonstrated to be efficient. However, there has been no implementation that allows custom connection costs, parallel parameter tuning with ground truth annotations, and the functionality to preserve ground truth connections, limiting application to datasets with partial annotations.
RESULTS: We developed LapTrack, a LAP-based tracker that allows arbitrary cost functions and inputs, parallel parameter tuning, and ground-truth track preservation. Analysis of real and artificial datasets demonstrates the advantage of custom metric functions for improving the tracking score over distance-only cases. The tracker can easily be combined with other Python-based tools for particle detection, segmentation, and visualization.
AVAILABILITY AND IMPLEMENTATION: LapTrack is available as a Python package on PyPI, and notebook examples are shared at https://github.com/yfukai/laptrack. The data and code for this publication are hosted at https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
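The core idea, framewise linking as a linear assignment problem with a tunable pairwise cost, can be sketched as follows (a minimal illustration using SciPy; this is not LapTrack's actual API, and the function and parameter names are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(pts_a, pts_b, metric=None, max_cost=15.0):
    """Link detections between two consecutive frames by solving a LAP.
    metric(p, q) is a tunable pairwise cost (default: Euclidean distance);
    pairs costing more than max_cost are left unlinked."""
    if metric is None:
        metric = lambda p, q: float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
    cost = np.array([[metric(p, q) for q in pts_b] for p in pts_a])
    rows, cols = linear_sum_assignment(cost)  # globally optimal matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

frame0 = [(0.0, 0.0), (10.0, 10.0)]
frame1 = [(10.5, 9.5), (1.0, 0.5), (40.0, 40.0)]
print(link_frames(frame0, frame1))  # → [(0, 1), (1, 0)]
```

Because `metric` is an arbitrary callable, it can incorporate features beyond position (intensity, shape, division likelihood), which is the "tunable metrics" idea the title refers to.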
Affiliation(s)
- Kyogo Kawaguchi
- Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan
- RIKEN Cluster for Pioneering Research, Kobe 650-0047, Japan
- Universal Biology Institute, The University of Tokyo, Tokyo 113-0033, Japan
7. Cho H, Nishimura K, Watanabe K, Bise R. Effective pseudo-labeling based on heatmap for unsupervised domain adaptation in cell detection. Med Image Anal 2022; 79:102436. DOI: 10.1016/j.media.2022.102436.
8. Nishimura K, Wang C, Watanabe K, Fei Elmer Ker D, Bise R. Weakly supervised cell instance segmentation under various conditions. Med Image Anal 2021; 73:102182. PMID: 34340103; DOI: 10.1016/j.media.2021.102182.
Abstract
Cell instance segmentation is important in biomedical research. For living cell analysis, microscopy images are captured under various conditions (e.g., the type of microscopy and type of cell). Deep-learning-based methods can be used to perform instance segmentation if sufficient annotations of individual cell boundaries are prepared as training data. Generally, annotations are required for each condition, which is very time-consuming and labor-intensive. To reduce the annotation cost, we propose a weakly supervised cell instance segmentation method that can segment individual cell regions under various conditions by using only rough cell centroid positions as training data. This method dramatically reduces the annotation cost compared with the standard annotation method for supervised segmentation. We demonstrated the efficacy of our method on various cell images; it outperformed several conventional weakly supervised methods on average. In addition, we demonstrated that our method can perform cell instance segmentation without any manual annotation by using pairs of phase contrast and fluorescence images, in which cell nuclei are stained, as training data.
Affiliation(s)
- Kazuya Nishimura
- Department of Advanced Information Technology, Kyushu University, Fukuoka, Japan
- Chenyang Wang
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR
- Dai Fei Elmer Ker
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR
- School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR
- Key Laboratory for Regenerative Medicine, Ministry of Education, School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Department of Orthopaedics and Traumatology, Prince of Wales Hospital, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Ryoma Bise
- Department of Advanced Information Technology, Kyushu University, Fukuoka, Japan
9. Su YT, Lu Y, Liu J, Chen M, Liu AA. Spatio-Temporal Mitosis Detection in Time-Lapse Phase-Contrast Microscopy Image Sequences: A Benchmark. IEEE Trans Med Imaging 2021; 40:1319-1328. PMID: 33465026; DOI: 10.1109/tmi.2021.3052854.
Abstract
In this paper, we report the results of the first international contest on mitosis detection in phase-contrast microscopy image sequences (https://www.iti-tju.org/mitosisdetection), which was held at the workshop on computer vision for microscopy image analysis (CVMI) at CVPR 2019. The contest aims to promote research on spatiotemporal mitosis detection in microscopy image sequences. For this contest, we released a large-scale time-lapse phase-contrast microscopy image dataset (C2C12-16) for the mitosis detection task. Compared with previous popular datasets (e.g., C2C12, C3H10), C2C12-16 contains more annotated mitotic events and more diverse cell culture environments. A total of ten different mitosis detection methods were submitted to the contest and evaluated on the test sets of four different cell culture environments in C2C12-16. In this benchmark, we describe all methods, conduct a thorough analysis based on their performance, and discuss feasible directions for mitosis detection. To the best of our knowledge, this is the first benchmark for mitosis detection in time-lapse phase-contrast microscopy image sequences.
10. Xian RP, Acremann Y, Agustsson SY, Dendzik M, Bühlmann K, Curcio D, Kutnyakhov D, Pressacco F, Heber M, Dong S, Pincelli T, Demsar J, Wurth W, Hofmann P, Wolf M, Scheidgen M, Rettig L, Ernstorfer R. An open-source, end-to-end workflow for multidimensional photoemission spectroscopy. Sci Data 2020; 7:442. PMID: 33335108; PMCID: PMC7746702; DOI: 10.1038/s41597-020-00769-8.
Abstract
Characterization of the electronic band structure of solid-state materials is routinely performed using photoemission spectroscopy. Recent advancements in short-wavelength light sources and electron detectors give rise to multidimensional photoemission spectroscopy, allowing parallel measurements of the electron spectral function simultaneously in energy, two momentum components, and additional physical parameters with single-event detection capability. Efficient processing of the photoelectron event streams at a rate of up to tens of megabytes per second will enable rapid band mapping for materials characterization. We describe an open-source workflow that allows user interaction with billion-count single-electron events in photoemission band mapping experiments, compatible with beamlines at 3rd- and 4th-generation light sources and table-top laser-based setups. The workflow offers an end-to-end recipe from distributed operations on single-event data to structured formats for downstream scientific tasks and storage, to materials science database integration. Both the workflow and processed data can be archived for reuse, providing the infrastructure for documenting the provenance and lineage of photoemission data for future high-throughput experiments.
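The central step of such band-mapping workflows, binning single-electron events into a structured multidimensional histogram, can be sketched in a few lines (a generic NumPy illustration with simulated events, not the authors' workflow code):

```python
import numpy as np

# Simulate single-electron detection events; each row is one event with
# (energy, kx, ky) coordinates, here drawn uniformly on [0, 1) for brevity.
rng = np.random.default_rng(0)
events = rng.uniform(0, 1, size=(100_000, 3))

# Bin the event stream into an (E, kx, ky) volume: the "band mapping" array
# that downstream analysis and storage formats operate on.
hist, edges = np.histogramdd(events, bins=(64, 32, 32), range=[(0, 1)] * 3)
print(hist.shape, int(hist.sum()))  # → (64, 32, 32) 100000
```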
Affiliation(s)
- R Patrick Xian
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Yves Acremann
- Laboratory for Solid State Physics, ETH Zurich, 8093, Zurich, Switzerland
- Maciej Dendzik
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Kevin Bühlmann
- Laboratory for Solid State Physics, ETH Zurich, 8093, Zurich, Switzerland
- Davide Curcio
- Department of Physics and Astronomy, Interdisciplinary Nanoscience Center (iNANO), Aarhus University, 8000, Aarhus C, Denmark
- Federico Pressacco
- DESY Photon Science, 22607, Hamburg, Germany
- Department of Physics, University of Hamburg, 22761, Hamburg, Germany
- Shuo Dong
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Tommaso Pincelli
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Jure Demsar
- Institute of Physics, University of Mainz, 55128, Mainz, Germany
- Wilfried Wurth
- DESY Photon Science, 22607, Hamburg, Germany
- Department of Physics, University of Hamburg, 22761, Hamburg, Germany
- Philip Hofmann
- Department of Physics and Astronomy, Interdisciplinary Nanoscience Center (iNANO), Aarhus University, 8000, Aarhus C, Denmark
- Martin Wolf
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Markus Scheidgen
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Department of Physics, Humboldt University of Berlin, 12489, Berlin, Germany
- Laurenz Rettig
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
- Ralph Ernstorfer
- Fritz Haber Institute of the Max Planck Society, 14195, Berlin, Germany
11. Nguyen T, Bui V, Thai A, Lam V, Raub CB, Chang LC, Nehmetallah G. Virtual organelle self-coding for fluorescence imaging via adversarial learning. J Biomed Opt 2020; 25(9):096009. PMID: 32996300; PMCID: PMC7522603; DOI: 10.1117/1.jbo.25.9.096009.
Abstract
SIGNIFICANCE: Our study introduces an application of deep learning to virtually generate fluorescence images, reducing the cost and time burdens of the considerable sample-preparation effort related to chemical fixation and staining.
AIM: The objective of our work was to determine how successfully deep learning methods perform fluorescence prediction that depends on a structural and/or functional relationship between input labels and output labels.
APPROACH: We present a virtual-fluorescence-staining method based on deep neural networks (VirFluoNet) to transform co-registered images of cells into subcellular-compartment-specific molecular fluorescence labels in the same field of view. An algorithm based on conditional generative adversarial networks was developed and trained on microscopy datasets from breast-cancer and bone-osteosarcoma cell lines (MDA-MB-231 and U2OS, respectively). Several established performance metrics (the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM)) as well as a novel performance metric, the tolerance level, were measured and compared for the same algorithm and input data.
RESULTS: For the MDA-MB-231 cells, the F-actin signal predicted the fluorescent antibody staining of vinculin better than phase contrast did as an input. For the U2OS cells, satisfactory performance metrics were achieved in comparison with ground truth: MAE < 0.005, 0.017, and 0.012; PSNR > 40, 34, and 33 dB; and SSIM > 0.925, 0.926, and 0.925 for 4',6-diamidino-2-phenylindole (DAPI)/Hoechst, endoplasmic reticulum, and mitochondria prediction, respectively, from channels of nucleoli and cytoplasmic RNA, Golgi plasma membrane, and F-actin.
CONCLUSIONS: These findings contribute to the understanding of the utility and limitations of deep learning image regression for predicting fluorescence microscopy datasets of biological cells. We infer that predicted image labels must have a structural and/or functional relationship to input labels. Furthermore, the approach introduced here holds promise for modeling the internal spatial relationships between organelles and biomolecules within living cells, leading to detection and quantification of alterations from a standard training dataset.
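Two of the reported metrics, MAE and PSNR, follow standard definitions and can be sketched directly (a generic NumPy illustration, not the paper's evaluation code):

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to data_range."""
    mse = float(np.mean((pred - target) ** 2))
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

# A prediction off by a uniform 0.01 everywhere has MAE 0.01 and PSNR 40 dB.
target = np.zeros((8, 8))
pred = target + 0.01
print(round(mae(pred, target), 3))   # → 0.01
print(round(psnr(pred, target), 1))  # → 40.0
```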
Affiliation(s)
- Thanh Nguyen
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
- Vy Bui
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
- Anh Thai
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
- Van Lam
- The Catholic University of America, Biomedical Engineering Department, Washington, DC, United States
- Christopher B. Raub
- The Catholic University of America, Biomedical Engineering Department, Washington, DC, United States
- Lin-Ching Chang
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
- George Nehmetallah
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
12. Nishimura K, Bise R. Spatial-Temporal Mitosis Detection in Phase-Contrast Microscopy via Likelihood Map Estimation by 3DCNN. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1811-1815. PMID: 33018351; DOI: 10.1109/embc44109.2020.9175676.
Abstract
Automated mitosis detection in time-lapse phase-contrast microscopy provides rich information for cell behavior analysis, and thus several mitosis detection methods have been proposed. However, these methods still have two problems: 1) they cannot detect multiple mitosis events when they are closely placed; 2) they do not consider annotation gaps, which may occur because the appearances of mitotic cells are very similar before and after the annotated frame. In this paper, we propose a novel mitosis detection method that can detect multiple mitosis events in a candidate sequence and mitigate the human annotation gap by estimating a spatial-temporal likelihood map with a 3DCNN. During training, the loss gradually decreases with the gap size between ground truth and estimation, which mitigates the annotation gaps. Our method outperformed the compared methods in terms of F1-score on a challenging dataset that contains data under four different conditions. Code is publicly available at https://github.com/naivete5656/MDMLM.
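The likelihood-map formulation, a Gaussian placed at each annotated mitosis event in (t, y, x) whose temporal spread tolerates small annotation gaps, can be sketched as follows (a generic NumPy illustration, not the authors' MDMLM code; names are ours):

```python
import numpy as np

def likelihood_map(shape, events, sigma_xy=3.0, sigma_t=1.5):
    """Build a spatial-temporal likelihood volume of shape (T, H, W) with a
    Gaussian peak at each annotated mitosis event (t, y, x). The temporal
    sigma softens the penalty for annotations a frame or two off."""
    t, y, x = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    vol = np.zeros(shape)
    for et, ey, ex in events:
        g = np.exp(-((x - ex) ** 2 + (y - ey) ** 2) / (2 * sigma_xy ** 2)
                   - (t - et) ** 2 / (2 * sigma_t ** 2))
        vol = np.maximum(vol, g)  # take per-voxel max so nearby peaks stay separate
    return vol

vol = likelihood_map((5, 32, 32), [(2, 16, 16)])
print(vol.shape, float(vol.max()))  # → (5, 32, 32) 1.0
```

Taking the per-voxel maximum rather than summing keeps each event a distinct unit-height peak, which is what lets a detector resolve multiple closely placed mitosis events.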
13. Lu Y, Liu AA, Chen M, Nie WZ, Su YT. Sequential Saliency Guided Deep Neural Network for Joint Mitosis Identification and Localization in Time-Lapse Phase Contrast Microscopy Images. IEEE J Biomed Health Inform 2020; 24:1367-1378. DOI: 10.1109/jbhi.2019.2943228.
14. Fukaya S, Aoki K, Kobayashi M, Takemura M. Kinetic Analysis of the Motility of Giant Virus-Infected Amoebae Using Phase-Contrast Microscopic Images. Front Microbiol 2020; 10:3014. PMID: 32038516; PMCID: PMC6988830; DOI: 10.3389/fmicb.2019.03014.
Abstract
Tracking cell motility is a useful tool for the study of cell physiology and microbiology. Although phase-contrast microscopy is commonly used, the existence of optical artifacts called “halo” and “shade-off” have inhibited image analysis of moving cells. Here we show kinetic image analysis of Acanthamoeba motility using a newly developed computer program named “Phase-contrast-based Kinetic Analysis Algorithm for Amoebae (PKA3),” which revealed giant-virus-infected amoebae-specific motilities and aggregation profiles using time-lapse phase-contrast microscopic images. This program quantitatively detected the time-dependent, sequential changes in cellular number, size, shape, and direction and distance of cell motility. This method expands the potential of kinetic analysis of cultured cells using versatile phase-contrast images. Furthermore, this program could be a useful tool for investigating detailed kinetic mechanisms of cell motility, not only in virus-infected amoebae but also in other cells, including cancer cells, immune response cells, and neurons.
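The kind of per-frame distance and direction statistics the program reports can be sketched from centroid tracks (a generic NumPy illustration, not the PKA3 program itself):

```python
import numpy as np

def motility_stats(track):
    """Per-frame displacement distances and movement directions (radians)
    for one cell's centroid track, an (N, 2) array of (x, y) positions."""
    track = np.asarray(track, dtype=float)
    steps = np.diff(track, axis=0)            # frame-to-frame displacement vectors
    distances = np.linalg.norm(steps, axis=1)
    directions = np.arctan2(steps[:, 1], steps[:, 0])
    return distances, directions

# A cell that moves 5 units, then stays put.
track = [(0, 0), (3, 4), (3, 4)]
d, a = motility_stats(track)
print(d)  # → [5. 0.]
```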
Affiliation(s)
- Sho Fukaya
- Laboratory of Biology Education, Department of Mathematics and Science Education, Graduate School of Science, Tokyo University of Science, Tokyo, Japan
- Keita Aoki
- Laboratory of Biology Education, Department of Mathematics and Science Education, Graduate School of Science, Tokyo University of Science, Tokyo, Japan
- Mio Kobayashi
- Laboratory of Biology, Department of Liberal Arts, Faculty of Science, Tokyo University of Science, Tokyo, Japan
- Masaharu Takemura
- Laboratory of Biology Education, Department of Mathematics and Science Education, Graduate School of Science, Tokyo University of Science, Tokyo, Japan
- Laboratory of Biology, Department of Liberal Arts, Faculty of Science, Tokyo University of Science, Tokyo, Japan