1
Qureshi MH, Ozlu N, Bayraktar H. Adaptive tracking algorithm for trajectory analysis of cells and layer-by-layer assessment of motility dynamics. Comput Biol Med 2022; 150:106193. [PMID: 37859286] [DOI: 10.1016/j.compbiomed.2022.106193]
Abstract
Tracking biological objects such as cells or subcellular components imaged with time-lapse microscopy enables us to understand the molecular principles underlying the dynamics of cell behaviors. However, automatic object detection, segmentation and trajectory extraction remain a rate-limiting step due to the intrinsic challenges of video processing. This paper presents an adaptive tracking algorithm (Adtari) that automatically finds the optimum search radius and cell linkages to determine trajectories in consecutive frames. A critical assumption in most tracking studies is that displacement remains unchanged throughout the movie, and cells in a few frames are usually analyzed to determine its magnitude. Tracking errors and inaccurate association of cells may occur if the user does not correctly evaluate this value or if no prior knowledge of cell movement is available. The key novelty of our method is that the minimum intercellular distance and the maximum displacement of cells between frames are dynamically computed and used to determine the threshold distance. Since the space between cells is highly variable in a given frame, our software recursively alters the magnitude to determine all plausible matches in the trajectory analysis. Our method therefore eliminates a major preprocessing step in which a constant distance was used to determine neighbor cells in tracking methods. Cells with multiple overlaps and splitting events were further evaluated using shape attributes including perimeter, area, ellipticity and distance. These features were applied to determine the closest matches by minimizing the difference in their magnitudes. Finally, the reporting section of our software was used to generate instant maps by overlaying cell features and trajectories. Adtari was validated using videos with variable signal-to-noise ratio, contrast ratio and cell density. We compared adaptive tracking with constant-distance and other methods to evaluate its performance and efficiency. Our algorithm yields a reduced mismatch ratio, an increased ratio of whole-cell tracks and higher frame-tracking efficiency, and allows layer-by-layer assessment of motility to characterize single cells. Adaptive tracking provides reliable, accurate, time-efficient and user-friendly open-source software that is well suited for the analysis of 2D fluorescence microscopy video datasets.
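The adaptive-threshold idea summarized in this abstract (derive the linking radius for each frame pair from the data instead of from a user-supplied constant) can be sketched in a few lines. The code below is a minimal illustration, not the published Adtari implementation; the greedy matching order and the `scale` factor are hypothetical choices.

```python
import numpy as np

def link_frames(prev_pts, curr_pts, scale=0.5):
    """Greedily link cell centroids across two consecutive frames.

    The search radius is not a fixed user constant: it is recomputed
    per frame from the minimum intercellular distance, echoing the
    adaptive idea above. `scale` is an illustrative factor.
    """
    prev = np.asarray(prev_pts, dtype=float)
    curr = np.asarray(curr_pts, dtype=float)
    # minimum distance between any two cells in the current frame
    d_cc = np.linalg.norm(curr[:, None] - curr[None, :], axis=-1)
    np.fill_diagonal(d_cc, np.inf)
    radius = scale * d_cc.min()          # adaptive linking threshold
    # distances from every previous centroid to every current centroid
    d = np.linalg.norm(prev[:, None] - curr[None, :], axis=-1)
    links, used = {}, set()
    for i in np.argsort(d.min(axis=1)):  # most confident cells first
        j = int(np.argmin(d[i]))
        if d[i, j] <= radius and j not in used:
            links[int(i)] = j
            used.add(j)
    return links, radius
```

For example, `link_frames([(0, 0), (10, 0)], [(1, 0), (11, 0)])` links each cell to its nearest neighbor under a radius of 5.0 derived from the 10-pixel cell spacing.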
Affiliation(s)
- Mohammad Haroon Qureshi
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey; Center for Translational Research, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Nurhan Ozlu
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Halil Bayraktar
- Department of Molecular Biology and Genetics, Istanbul Technical University, Maslak, Sariyer, 34467, Istanbul, Turkey.
2
Arbelle A, Cohen S, Raviv TR. Dual-Task ConvLSTM-UNet for Instance Segmentation of Weakly Annotated Microscopy Videos. IEEE Trans Med Imaging 2022; PP:1948-1960. [PMID: 35180079] [DOI: 10.1109/tmi.2022.3152927]
Abstract
Convolutional Neural Networks (CNNs) are considered state-of-the-art segmentation methods for biomedical images in general and microscopy sequences of living cells in particular. The success of CNNs is attributed to their ability to capture the structural properties of the data, which enables accommodating complex spatial structures of cells, low contrast, and unclear boundaries. However, in their standard form CNNs do not exploit the temporal information available in time-lapse sequences, which can be crucial to separating touching and partially overlapping cell instances. In this work, we exploit cell dynamics using a novel CNN architecture which allows multi-scale spatio-temporal feature extraction. Specifically, a novel recurrent neural network (RNN) architecture is proposed based on the integration of a Convolutional Long Short Term Memory (ConvLSTM) network with the U-Net. The proposed ConvLSTM-UNet network is constructed as a dual-task network to enable training with weakly annotated data, in the form of approximate cell centers, termed markers, when the complete cells' outlines are not available. We further use the fast marching method to facilitate the partitioning of clustered cells into individual connected components. Finally, we suggest an adaptation of the method for 3D microscopy sequences without drastically increasing the computational load. The method was evaluated on the Cell Segmentation Benchmark and was ranked among the top three methods on six submitted datasets. Exploiting the proposed built-in marker estimator, we also present state-of-the-art cell detection results for an additional, publicly available, weakly annotated dataset. The source code is available at https://gitlab.com/shaked0/lstmUnet.
3
Zhang J, Qi J, Zheng Z, Sun L. A robust image segmentation framework based on total variation spectral transform. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2021.12.001]
4
An accurate cell tracking approach with self-regulated foraging behavior of ant colonies in dynamic microscopy images. Appl Intell 2022. [DOI: 10.1007/s10489-021-02424-0]
5
Cheng HJ, Hsu CH, Hung CL, Lin CY. A review for cell and particle tracking on microscopy images using algorithms and deep learning technologies. Biomed J 2021; 45:465-471. [PMID: 34628059] [PMCID: PMC9421944] [DOI: 10.1016/j.bj.2021.10.001]
Abstract
Time-lapse microscopy images generated by biological experiments have been widely used for observing target activities, such as motion trajectories and survival states. Based on these observations, biologists can draw experimental conclusions or present new hypotheses for several biological applications, e.g., virus research or drug design. Many methods and tools have been proposed in the past to observe cell and particle activities, defined as single-cell tracking and single-particle tracking problems, using algorithms and deep learning technologies. This article reviews these works: it first summarizes past methods and research topics, then points out the problems raised by these works, and finally proposes future research directions. The contributions of this article will help researchers understand past development trends and propose innovative technologies.
Affiliation(s)
- Hui-Jun Cheng
- Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou 510095, China; Department of Computer Science and Information Engineering, Providence University, Taichung 43301, Taiwan
- Ching-Hsien Hsu
- Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Mathematics and Big Data, Foshan University, Foshan 528000, China; Department of Medical Research, China Medical University Hospital, China Medical University, Taiwan
- Che-Lun Hung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan; Department of Computer Science and Communication Engineering, Providence University, Taichung 43301, Taiwan
- Chun-Yuan Lin
- Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan; Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan 33302, Taiwan.
6
Löffler K, Scherr T, Mikut R. A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction. PLoS One 2021; 16:e0249257. [PMID: 34492015] [PMCID: PMC8423278] [DOI: 10.1371/journal.pone.0249257]
Abstract
Automatic cell segmentation and tracking enables quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and should reduce manual curation time by providing automatic correction of segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or have a vast set of parameters that needs either manual tuning or annotated data for parameter tuning. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction. Moreover, no training data is needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentation, including false negatives and over- and under-segmentation errors. Our tracking algorithm can correct false negatives and over- and under-segmentation errors, as well as a mixture of the aforementioned segmentation errors. On data sets with under-segmentation errors or a mixture of segmentation errors, our approach performs best. Moreover, without requiring additional manual tuning, our approach ranks several times in the top 3 on the 6th edition of the Cell Tracking Challenge.
Affiliation(s)
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Institute of Biological and Chemical Systems - Biological Information Processing, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
7
Liu Q, Gaeta IM, Zhao M, Deng R, Jha A, Millis BA, Mahadevan-Jansen A, Tyska MJ, Huo Y. ASIST: Annotation-free synthetic instance segmentation and tracking by adversarial simulations. Comput Biol Med 2021; 134:104501. [PMID: 34107436] [PMCID: PMC8263511] [DOI: 10.1016/j.compbiomed.2021.104501]
Abstract
BACKGROUND: The quantitative analysis of microscope videos often requires instance segmentation and tracking of cellular and subcellular objects. The traditional method consists of two stages: (1) performing instance object segmentation of each frame, and (2) associating objects frame-by-frame. Recently, pixel-embedding-based deep learning has performed these two steps simultaneously as a single-stage, holistic solution. Pixel-embedding-based learning forces similar feature representations of pixels from the same object, while maximizing the difference of feature representations from different objects. However, such deep learning methods require consistent annotations not only spatially (for segmentation), but also temporally (for tracking). In computer vision, annotated training data with consistent segmentation and tracking is resource intensive, and the severity of this problem is multiplied in microscopy imaging due to (1) dense objects (e.g., overlapping or touching), and (2) high dynamics (e.g., irregular motion and mitosis). Adversarial simulations have provided successful solutions to alleviate the lack of such annotations in dynamic scenes in computer vision, such as using simulated environments (e.g., computer games) to train real-world self-driving systems. METHODS: In this paper, we propose an annotation-free synthetic instance segmentation and tracking (ASIST) method with adversarial simulation and single-stage pixel-embedding-based learning. CONTRIBUTION: The contribution of this paper is three-fold: (1) the proposed method aggregates adversarial simulations and single-stage pixel-embedding-based deep learning; (2) the method is assessed on both cellular (i.e., HeLa cell) and subcellular (i.e., microvilli) objects; and (3) to the best of our knowledge, this is the first study to explore annotation-free instance segmentation and tracking for microscope videos. RESULTS: The ASIST method achieved an important step forward when compared with fully supervised approaches: ASIST shows 7%-11% higher segmentation, detection and tracking performance on microvilli relative to fully supervised methods, and comparable performance on HeLa cell videos.
Affiliation(s)
- Quan Liu
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Isabella M Gaeta
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Mengyang Zhao
- Tufts University, Computer Science, Medford, MA, 02155, USA
- Ruining Deng
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Aadarsh Jha
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Bryan A Millis
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Matthew J Tyska
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Yuankai Huo
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA.
8
Cascarano P, Comes MC, Mencattini A, Parrini MC, Piccolomini EL, Martinelli E. Recursive Deep Prior Video: A super resolution algorithm for time-lapse microscopy of organ-on-chip experiments. Med Image Anal 2021; 72:102124. [PMID: 34157611] [DOI: 10.1016/j.media.2021.102124]
Abstract
Biological experiments based on organ-on-chips (OOCs) exploit light Time-Lapse Microscopy (TLM) for direct observation of cell movement, which is an observable signature of underlying biological processes. A high spatial resolution is essential to capture cell dynamics and interactions from experiments recorded by TLM. Unfortunately, due to physical and cost limitations, acquiring high-resolution videos is not always possible. To overcome this problem, we present a new deep learning-based algorithm that extends the well-known Deep Image Prior (DIP) to TLM video super resolution without requiring any training. The proposed Recursive Deep Prior Video method introduces several novelties. The weights of the DIP network architecture are initialized for each frame according to a new recursive updating rule combined with an efficient early stopping criterion. Moreover, the DIP loss function is penalized by two different Total Variation-based terms. The method has been validated on synthetic (i.e., artificially generated) videos as well as real videos from OOC experiments related to tumor-immune interaction. The achieved results are compared with several state-of-the-art trained deep learning super resolution algorithms, showing outstanding performance.
Affiliation(s)
- Pasquale Cascarano
- Department of Mathematics, University of Bologna, Piazza di Porta S. Donato 5, Bologna 40126, Italy
- Maria Colomba Comes
- Department of Electronic Engineering, University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (ICLOC), University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy.
- Arianna Mencattini
- Department of Electronic Engineering, University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (ICLOC), University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy
- Maria Carla Parrini
- Institute Curie, Centre de Recherche, Paris Sciences et Lettres Research University, Paris 75005, France
- Elena Loli Piccolomini
- Department of Computer Science and Engineering, Mura Anteo Zamboni 7, Bologna 40126, Italy
- Eugenio Martinelli
- Department of Electronic Engineering, University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (ICLOC), University of Tor Vergata, Via del Politecnico 1, Rome 00133, Italy
9
Tian C, Yang C, Spencer SL. EllipTrack: A Global-Local Cell-Tracking Pipeline for 2D Fluorescence Time-Lapse Microscopy. Cell Rep 2020; 32:107984. [PMID: 32755578] [DOI: 10.1016/j.celrep.2020.107984]
Abstract
Time-lapse microscopy provides an unprecedented opportunity to monitor single-cell dynamics. However, tracking cells for long periods remains a technical challenge, especially for multi-day, large-scale movies with rapid cell migration, high cell density, and drug treatments that alter cell morphology/behavior. Here, we present EllipTrack, a global-local cell-tracking pipeline optimized for tracking such movies. EllipTrack first implements a global track-linking algorithm to construct tracks that maximize the probability of cell lineages. Tracking mistakes are then corrected with a local track-correction module in which tracks generated by the global algorithm are systematically examined and amended if a more probable alternative can be found. Through benchmarking, we show that EllipTrack outperforms state-of-the-art cell trackers and generates nearly error-free cell lineages for multiple large-scale movies. In addition, EllipTrack can adapt to time- and cell-density-dependent changes in cell migration speeds and requires minimal training datasets. EllipTrack is available at https://github.com/tianchengzhe/EllipTrack.
Affiliation(s)
- Chengzhe Tian
- Department of Biochemistry, University of Colorado Boulder, Boulder, CO 80303, USA; BioFrontiers Institute, University of Colorado Boulder, Boulder, CO 80303, USA.
- Chen Yang
- Department of Molecular, Cellular, and Developmental Biology, University of Colorado Boulder, Boulder, CO 80303, USA; BioFrontiers Institute, University of Colorado Boulder, Boulder, CO 80303, USA
- Sabrina L Spencer
- Department of Biochemistry, University of Colorado Boulder, Boulder, CO 80303, USA; BioFrontiers Institute, University of Colorado Boulder, Boulder, CO 80303, USA.
10
Youn S, Lee K, Son J, Yang IH, Hwang JY. Fully-automatic deep learning-based analysis for determination of the invasiveness of breast cancer cells in an acoustic trap. Biomed Opt Express 2020; 11:2976-2995. [PMID: 32637236] [PMCID: PMC7316006] [DOI: 10.1364/boe.390558]
Abstract
A single-beam acoustic trapping technique has been shown to be very useful for determining the invasiveness of suspended breast cancer cells in an acoustic trap with a manual calcium analysis method. However, for the rapid translation of the technology into the clinic, an efficient and accurate analytical method is needed. We therefore develop a fully automatic deep learning-based calcium image analysis algorithm for determining the invasiveness of suspended breast cancer cells using a single-beam acoustic trapping system. The algorithm segments cells, finds trapped cells, and quantifies their calcium changes over time. For better segmentation of calcium fluorescent cells even with vague boundaries, a novel deep learning architecture with multi-scale/multi-channel convolution operations (MM-Net) is devised and constructed by a target inversion training method. The MM-Net outperforms other deep learning models in cell segmentation. Also, a detection/quantification algorithm is developed and implemented to automatically determine the invasiveness of a trapped cell. For evaluation, the algorithm is applied to quantify the invasiveness of breast cancer cells. The results show that the algorithm offers performance similar to the manual calcium analysis method for determining the invasiveness of cancer cells, suggesting that it may serve as a novel tool to automatically determine the invasiveness of cancer cells with high efficiency.
Affiliation(s)
- Sangyeon Youn
- Daegu Gyeongbuk Institute of Science and Technology, Department of Information and Communication Engineering, 333 Techno Jungang-daero, Hyeonpung-myun, Dalseong-gun, Daegu, 42988, South Korea
- S. Youn and K. Lee contributed equally to this study
- Kyungsu Lee
- Daegu Gyeongbuk Institute of Science and Technology, Department of Information and Communication Engineering, 333 Techno Jungang-daero, Hyeonpung-myun, Dalseong-gun, Daegu, 42988, South Korea
- Jeehoon Son
- Daegu Gyeongbuk Institute of Science and Technology, Department of Information and Communication Engineering, 333 Techno Jungang-daero, Hyeonpung-myun, Dalseong-gun, Daegu, 42988, South Korea
- In-Hwan Yang
- Kyonggi University, Department of Chemical Engineering, 154-42, Gwanggyosan-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16227, South Korea
- Jae Youn Hwang
- Daegu Gyeongbuk Institute of Science and Technology, Department of Information and Communication Engineering, 333 Techno Jungang-daero, Hyeonpung-myun, Dalseong-gun, Daegu, 42988, South Korea
11
Gilad T, Reyes J, Chen JY, Lahav G, Riklin Raviv T. Fully unsupervised symmetry-based mitosis detection in time-lapse cell microscopy. Bioinformatics 2019; 35:2644-2653. [PMID: 30590471] [PMCID: PMC6662301] [DOI: 10.1093/bioinformatics/bty1034]
Abstract
MOTIVATION: Cell microscopy datasets have great diversity due to variability in cell types, imaging techniques and protocols. Existing methods are either tailored to specific datasets or are based on supervised learning, which requires comprehensive manual annotations. The latter approach, however, poses a significant difficulty due to the imbalance between the number of mitotic cells and the entire cell population in a time-lapse microscopy sequence. RESULTS: We present a fully unsupervised framework for both mitosis detection and mother-daughter association in fluorescence microscopy data. The proposed method accommodates the difficulty of different cell appearances and dynamics. Addressing symmetric cell divisions, a key concept is utilizing the daughters' similarity. Association is accomplished by defining cell neighborhoods via a stochastic version of the Delaunay triangulation and optimizing by dynamic programming. Our framework presents promising detection results for a variety of fluorescence microscopy datasets from different sources, including 2D and 3D sequences from the Cell Tracking Challenge. AVAILABILITY AND IMPLEMENTATION: Code is available on GitHub (github.com/topazgl/mitodix). SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Topaz Gilad
- Department of Electrical and Computer Engineering and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
- Jose Reyes
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Jia-Yun Chen
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Galit Lahav
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Tammy Riklin Raviv
- Department of Electrical and Computer Engineering and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
12
Payer C, Štern D, Feiner M, Bischof H, Urschler M. Segmenting and tracking cell instances with cosine embeddings and recurrent hourglass networks. Med Image Anal 2019; 57:106-119. [PMID: 31299493] [DOI: 10.1016/j.media.2019.06.015]
Abstract
In contrast to semantic segmentation, instance segmentation assigns unique labels to each individual instance of the same object class. In this work, we propose a novel recurrent fully convolutional network architecture for tracking such instance segmentations over time, which is highly relevant, e.g., in biomedical applications involving cell growth and migration. Our network architecture incorporates convolutional gated recurrent units (ConvGRU) into a stacked hourglass network to utilize temporal information, e.g., from microscopy videos. Moreover, we train our network with a novel embedding loss based on cosine similarities, such that the network predicts unique embeddings for every instance throughout videos, even in the presence of dynamic structural changes due to mitosis of cells. To create the final tracked instance segmentations, the pixel-wise embeddings are clustered among subsequent video frames using the mean shift algorithm. After showing the performance of the instance segmentation on a static in-house dataset of muscle fibers from H&E-stained microscopy images, we also evaluate our proposed recurrent stacked hourglass network regarding instance segmentation and tracking performance on six datasets from the ISBI Cell Tracking Challenge, where it delivers state-of-the-art results.
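The final clustering step mentioned in this abstract (pixel-wise embeddings grouped by mean shift) can be illustrated with a toy, numpy-only mean-shift pass over embedding vectors. This is a simplified stand-in under stated assumptions, not the paper's implementation: the flat kernel, bandwidth, iteration count and mode-merging tolerance are illustrative choices.

```python
import numpy as np

def mean_shift_labels(emb, bandwidth=0.5, iters=30):
    """Minimal mean-shift clustering of embedding vectors.

    Repeatedly shifts every point to the mean of its neighbours within
    `bandwidth`, then assigns one label per converged mode. Points that
    end up at the same mode belong to the same instance.
    """
    pts = np.asarray(emb, dtype=float).copy()
    for _ in range(iters):
        # flat-kernel neighbourhood weights for every pair of points
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        w = (d < bandwidth).astype(float)
        pts = (w @ pts) / w.sum(axis=1, keepdims=True)
    # group converged points into modes
    labels, modes = [], []
    for p in pts:
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels.append(k)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return labels
```

On four 2D embeddings forming two tight groups, e.g. `[[0, 0], [0.1, 0], [5, 5], [5.1, 5]]` with `bandwidth=1.0`, the sketch assigns labels `[0, 0, 1, 1]`.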
Affiliation(s)
- Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Darko Štern
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
- Marlies Feiner
- Division of Phoniatrics, Medical University Graz, Graz, Austria
- Horst Bischof
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Martin Urschler
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria; Department of Computer Science, The University of Auckland, New Zealand.
13
Wang W, Taft DA, Chen YJ, Zhang J, Wallace CT, Xu M, Watkins SC, Xing J. Learn to segment single cells with deep distance estimator and deep cell detector. Comput Biol Med 2019; 108:133-141. [PMID: 31005005] [PMCID: PMC6781873] [DOI: 10.1016/j.compbiomed.2019.04.006]
Abstract
Single-cell segmentation is a critical and challenging step in cell imaging analysis. Traditional processing methods require time and labor to manually fine-tune parameters and lack parameter transferability between different situations. Recently, deep convolutional neural networks (CNNs), which treat segmentation as a pixel-wise classification problem, have become a general and efficient method for image segmentation. However, cell imaging data often possess characteristics that adversely affect segmentation accuracy: absence of established training datasets, few pixels on cell boundaries, and ubiquitous blurry features. We developed a strategy that combines the strengths of CNNs and the traditional watershed algorithm. First, we trained a CNN to learn the Euclidean distance transform (EDT) of the mask corresponding to the input images (deep distance estimator). Next, we trained a faster R-CNN (Region with CNN) to detect individual cells in the EDT image (deep cell detector). Then, the watershed algorithm performed the final segmentation using the outputs of the previous two steps. Tests on a library of fluorescence, phase contrast and differential interference contrast (DIC) images showed that both the combined method and various forms of the pixel-wise classification algorithm achieved similar pixel-wise accuracy. However, the combined method achieved significantly higher cell count accuracy than the pixel-wise classification algorithm, with the latter performing poorly when separating connected cells, especially those connected by blurry boundaries. This difference is most obvious when applied to noisy images of densely packed cells. Furthermore, both the deep distance estimator and the deep cell detector converge fast and are easy to train.
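The regression target of the deep distance estimator described above is the Euclidean distance transform (EDT) of the cell mask. A brute-force EDT makes that target concrete; this is an illustrative sketch of the quantity being learned, not the authors' training code (in practice `scipy.ndimage.distance_transform_edt` computes the same map efficiently).

```python
import numpy as np

def edt_target(mask):
    """Brute-force Euclidean distance transform of a binary mask.

    Each foreground pixel receives its distance to the nearest
    background pixel; background pixels stay 0. This is the per-pixel
    value a distance-regressing network would be trained to predict.
    """
    mask = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(~mask)            # background coordinates
    bg = np.stack([ys, xs], axis=1)
    out = np.zeros(mask.shape)
    for y, x in zip(*np.nonzero(mask)):   # loop over foreground pixels
        out[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return out
```

Seeding a watershed from the peaks of this map is what separates touching cells, with the detector stage filtering spurious peaks.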
Affiliation(s)
- Weikang Wang
- Department of Computational and System Biology, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
- David A Taft
- Department of Computational and System Biology, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
- Yi-Jiun Chen
- Department of Computational and System Biology, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
- Jingyu Zhang
- Department of Computational and System Biology, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
- Callen T Wallace
- Department of Cell Biology, and Center for Biologic Imaging, University of Pittsburgh, Pittsburgh, PA, 15261, USA.
- Min Xu
- Department of Computational Biology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Simon C Watkins
- Department of Cell Biology, and Center for Biologic Imaging, University of Pittsburgh, Pittsburgh, PA, 15261, USA.
- Jianhua Xing
- Department of Computational and System Biology, University of Pittsburgh, Pittsburgh, PA, 15260, USA; UPMC-Hillman Cancer Center, University of Pittsburgh, Pittsburgh, PA, 15232, USA.
14
Kensert A, Harrison PJ, Spjuth O. Transfer Learning with Deep Convolutional Neural Networks for Classifying Cellular Morphological Changes. SLAS Discov 2019; 24:466-475. [PMID: 30641024] [PMCID: PMC6484664] [DOI: 10.1177/2472555218818756]
Abstract
The quantification and identification of cellular phenotypes from high-content microscopy images has proven to be very useful for understanding biological activity in response to different drug treatments. The traditional approach has been to use classical image analysis to quantify changes in cell morphology, which requires several nontrivial and independent analysis steps. Recently, convolutional neural networks have emerged as a compelling alternative, offering good predictive performance and the possibility to replace traditional workflows with a single network architecture. In this study, we applied the pretrained deep convolutional neural networks ResNet50, InceptionV3, and InceptionResnetV2 to predict cell mechanisms of action in response to chemical perturbations for two cell profiling datasets from the Broad Bioimage Benchmark Collection. These networks were pretrained on ImageNet, enabling much quicker model training. We obtain higher predictive accuracy than previously reported, between 95% and 97%. The ability to quickly and accurately distinguish between different cell morphologies from a scarce amount of labeled data illustrates the combined benefit of transfer learning and deep convolutional neural networks for interrogating cell-based images.
Affiliation(s)
- Alexander Kensert
- Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden
- Philip J Harrison
- Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden
- Ola Spjuth
- Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden