1. Liu R, Dai W, Wu C, Wu T, Wang M, Zhou J, Zhang X, Li WJ, Liu J. Deep Learning-Based Microscopic Cell Detection Using Inverse Distance Transform and Auxiliary Counting. IEEE J Biomed Health Inform 2024;28:6092-6104. PMID: 38900626. DOI: 10.1109/jbhi.2024.3417229.
Abstract
Microscopic cell detection is a challenging task due to significant inter-cell occlusions in dense clusters and diverse cell morphologies. This paper introduces a novel framework designed to enhance automated cell detection. The proposed approach integrates a deep learning model that produces an inverse distance transform-based detection map from the given image, accompanied by a secondary network designed to regress a cell density map from the same input. The inverse distance transform-based map effectively highlights each cell instance in the densely populated areas, while the density map accurately estimates the total cell count in the image. Then, a custom counting-aided cell center extraction strategy leverages the cell count obtained by integrating over the density map to refine the detection process, significantly reducing false responses and thereby boosting overall accuracy. The proposed framework demonstrated superior performance with F-scores of 96.93%, 91.21%, and 92.00% on the VGG, MBM, and ADI datasets, respectively, surpassing existing state-of-the-art methods. It also achieved the lowest distance error, further validating the effectiveness of the proposed approach. These results demonstrate significant potential for automated cell analysis in biomedical applications.
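The inverse distance transform target described in this abstract can be illustrated with a small sketch (an illustrative reconstruction, not the authors' code; the grid size, annotation points, and peak threshold are arbitrary assumptions): each pixel stores 1/(1 + d), where d is the distance to the nearest annotated cell center, so every cell appears as a sharp, well-separated peak even in dense regions.

```python
import numpy as np

def inverse_distance_map(shape, centers):
    """Build an inverse distance transform map for dot annotations `centers`."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    c = np.asarray(centers, dtype=float)
    # distance from every pixel to its nearest annotated centre
    d = np.sqrt(((pts[:, None, :] - c[None, :, :]) ** 2).sum(-1)).min(1)
    return (1.0 / (1.0 + d)).reshape(shape)

def extract_peaks(m, thr=0.9):
    """Naive centre extraction: pixels whose map value exceeds `thr`."""
    return list(zip(*np.where(m >= thr)))

idt = inverse_distance_map((16, 16), [(4, 4), (10, 12)])
print(extract_peaks(idt))  # the two annotated centres: [(4, 4), (10, 12)]
```

In the paper the map is predicted by a network and center extraction is refined using the count obtained from the auxiliary density map; here a fixed threshold stands in for that refinement step.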
2. Ding Y, Zheng Y, Han Z, Yang X. Using optimal transport theory to optimize a deep convolutional neural network microscopic cell counting method. Med Biol Eng Comput 2023;61:2939-2950. PMID: 37532907. DOI: 10.1007/s11517-023-02862-7.
Abstract
Medical image processing has become increasingly important in recent years, particularly in the field of microscopic cell imaging. However, accurately counting the number of cells in an image is challenging because of the significant variations in cell size and shape. To tackle this problem, many existing methods rely on deep learning techniques, such as convolutional neural networks (CNNs), to count cells directly or use regression-based counting to learn the mapping between an input image and a predicted cell density map. In this paper, we propose a novel approach that guides the cell counting process by optimizing the loss function with the optimal transport method, a rigorous measure of the difference between the count map predicted by the CNN and the dot annotation map. We evaluated our algorithm on three publicly available cell counting benchmarks: the synthetic fluorescence microscopy (VGG) dataset, the modified bone marrow (MBM) dataset, and the human subcutaneous adipose tissue (ADI) dataset. Our method outperforms other state-of-the-art methods, achieving a mean absolute error (MAE) of 2.3, 4.8, and 13.1 on the VGG, MBM, and ADI datasets, respectively, with smaller standard deviations. By using the optimal transport method, our approach provides a more accurate and reliable cell counting method for medical image processing.
Affiliation(s)
- Yuanyuan Ding
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
- Zeyu Han
- School of Mathematics and Statistics, Shandong University (Weihai), Weihai, 264209, Shandong, China
- Xinbo Yang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
3. MACnet: Mask Augmented Counting Network for Class-Agnostic Counting. Pattern Recognit Lett 2023. DOI: 10.1016/j.patrec.2023.03.017.
4. Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. PMID: 36443935. DOI: 10.1111/neup.12880.
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
5. Guo Y, Krupa O, Stein J, Wu G, Krishnamurthy A. SAU-Net: A Unified Network for Cell Counting in 2D and 3D Microscopy Images. IEEE/ACM Trans Comput Biol Bioinform 2022;19:1920-1932. PMID: 34133284. PMCID: PMC8924707. DOI: 10.1109/tcbb.2021.3089608.
Abstract
Image-based cell counting is a fundamental yet challenging task with wide applications in biological research. In this paper, we propose a novel unified deep network framework designed to solve this problem for various cell types in both 2D and 3D images. Specifically, we first propose SAU-Net for cell counting by extending the segmentation network U-Net with a Self-Attention module. Second, we design an extension of Batch Normalization (BN) to facilitate the training process for small datasets. In addition, a new 3D benchmark dataset based on the existing mouse blastocyst (MBC) dataset is developed and released to the community. Our SAU-Net achieves state-of-the-art results on four benchmark 2D datasets - synthetic fluorescence microscopy (VGG) dataset, Modified Bone Marrow (MBM) dataset, human subcutaneous adipose tissue (ADI) dataset, and Dublin Cell Counting (DCC) dataset, and the new 3D dataset, MBC. The BN extension is validated using extensive experiments on the 2D datasets, since GPU memory constraints preclude use of 3D datasets. The source code is available at https://github.com/mzlr/sau-net.
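The Self-Attention module mentioned in this abstract can be sketched in its most generic single-head form (a minimal NumPy illustration under assumed weight shapes, not SAU-Net's actual module): each position attends to every other position, so the network can aggregate evidence across the whole image rather than only a local receptive field.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Minimal single-head self-attention over a sequence of feature vectors."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])     # scaled dot-product scores
    w = np.exp(scores - scores.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)               # softmax over positions
    return w @ v                               # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                    # 5 "pixels", 8 channels
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                               # (5, 8)
```

In the paper this operation is embedded inside a U-Net; here the spatial grid is flattened to a short sequence purely to show the mechanics.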
6. Learning to count biological structures with raters’ uncertainty. Med Image Anal 2022;80:102500. DOI: 10.1016/j.media.2022.102500.
7. Chen Y, Liang D, Bai X, Xu Y, Yang X. Cell Localization and Counting Using Direction Field Map. IEEE J Biomed Health Inform 2021;26:359-368. PMID: 34406952. DOI: 10.1109/jbhi.2021.3105545.
Abstract
Automatic cell counting in pathology images is challenging due to blurred boundaries, low contrast, and overlap between cells. In this paper, we train a convolutional neural network (CNN) to predict a two-dimensional direction field map and then use it to localize individual cells for counting. Specifically, we define a direction field on each pixel in the cell regions (obtained by dilating the original cell-center annotations) as a two-dimensional unit vector pointing from the pixel to its corresponding cell center. Direction fields of adjacent pixels in different cells point away from each other, while those in the same cell region point to the same center. This unique property is used to partition overlapping cells for localization and counting. To deal with blurred boundaries and low-contrast cells, we set the direction field of background pixels to zero in the ground-truth generation, so that adjacent pixels belonging to cells and background have an obvious difference in the predicted direction field. To further handle cells of varying density and overlap, we adopt a geometry-adaptive (varying) radius for cells of different densities when generating the ground-truth direction field map, which guides the CNN to separate cells of different densities as well as overlapping cells. Extensive experimental results on three widely used datasets (i.e., the Cell, CRCHistoPhenotype2016, and MBM datasets) demonstrate the effectiveness of the proposed approach.
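The ground-truth direction field described here can be sketched as follows (an illustrative reconstruction with an assumed grid, centers, and radius, not the authors' code): each in-cell pixel stores a unit vector toward its nearest annotated center, and background pixels keep a zero vector.

```python
import numpy as np

def direction_field(shape, centers, radius=2.0):
    """Unit vectors pointing from each in-cell pixel to its nearest annotated
    centre; pixels farther than `radius` (background) keep a zero vector."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.stack([ys, xs], axis=-1).astype(float)        # (H, W, 2)
    c = np.asarray(centers, dtype=float)                   # (N, 2)
    diff = c[None, None, :, :] - pix[:, :, None, :]        # pixel -> centre
    dist = np.linalg.norm(diff, axis=-1)                   # (H, W, N)
    near = dist.argmin(axis=-1)                            # nearest-centre index
    v = diff[ys, xs, near]                                 # (H, W, 2)
    d = dist[ys, xs, near]                                 # (H, W)
    field = np.zeros_like(v)
    mask = (d > 0) & (d <= radius)
    field[mask] = v[mask] / d[mask][:, None]               # normalize to unit length
    return field

f = direction_field((8, 8), [(2, 2), (5, 5)])
print(f[2, 3])   # pixel just right of centre (2, 2) points left: [ 0. -1.]
```

Along the seam between two touching cells, adjacent pixels carry opposing vectors, which is the cue used to split the cells at inference time.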
8. Zhang Q, Yun KK, Wang H, Yoon SW, Lu F, Won D. Automatic cell counting from stimulated Raman imaging using deep learning. PLoS One 2021;16:e0254586. PMID: 34288972. PMCID: PMC8294532. DOI: 10.1371/journal.pone.0254586.
Abstract
In this paper, we propose an automatic cell counting framework for stimulated Raman scattering (SRS) images, which can assist tumor tissue characterization, cancer diagnosis, and surgery planning. SRS microscopy has advanced tumor diagnosis and surgery by mapping lipids and proteins from fresh specimens and rapidly revealing fundamental diagnostic hallmarks of tumors at high resolution. However, cell counting from label-free SRS images has been challenging due to the limited contrast between cells and tissue, along with the heterogeneity of tissue morphology and biochemical composition. To this end, a deep learning-based cell counting scheme is proposed by modifying and applying U-Net, an effective medical image semantic segmentation model that requires only a small number of training samples. The distance transform and watershed segmentation algorithms are then applied to yield cell instance segmentation and cell counting results. By performing cell counting on SRS images of real human brain tumor specimens, promising results are obtained, with an area under the curve (AUC) above 98% and a correlation of R = 0.97 between cell counts from SRS images and from histological images with hematoxylin and eosin (H&E) staining. The proposed cell counting scheme illustrates the possibility and potential of performing cell counting automatically in near real time and encourages the study of applying deep learning techniques in biomedical and pathological image analyses.
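The distance-transform-based counting step (the classical post-processing, not the paper's U-Net) can be illustrated in miniature with SciPy; the binary mask and threshold below are assumptions, and the watershed split is approximated by simply labeling the distance-transform cores.

```python
import numpy as np
from scipy import ndimage as ndi

# two well-separated square "cells" in a binary foreground mask
mask = np.zeros((12, 12), dtype=bool)
mask[1:5, 1:5] = True
mask[7:11, 7:11] = True

dist = ndi.distance_transform_edt(mask)   # peaks at the cell interiors
peaks = dist >= 0.7 * dist.max()          # keep only the core of each blob
_, n_cells = ndi.label(peaks)             # one connected component per cell
print(n_cells)                            # 2
```

In the full pipeline the cores would instead seed a watershed on the inverted distance map, so that touching cells are split along the ridge between their peaks.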
Affiliation(s)
- Qianqian Zhang
- Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
- Kyung Keun Yun
- Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
- Hao Wang
- Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
- Sang Won Yoon
- Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
- Fake Lu
- Department of Biomedical Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
- Daehan Won
- Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, United States of America
9. Stallmann D, Göpfert JP, Schmitz J, Grünberger A, Hammer B. Towards an automatic analysis of CHO-K1 suspension growth in microfluidic single-cell cultivation. Bioinformatics 2021;37:3632-3639. PMID: 34019074. DOI: 10.1093/bioinformatics/btab386.
Abstract
Motivation: Innovative microfluidic systems carry the promise to greatly facilitate spatio-temporal analysis of single cells under well-defined environmental conditions, allowing novel insights into population heterogeneity and opening new opportunities for fundamental and applied biotechnology. Microfluidics experiments, however, are accompanied by vast amounts of data, such as time series of microscopic images, for which manual evaluation is infeasible due to the sheer number of samples. While classical image processing technologies do not lead to satisfactory results in this domain, modern deep learning technologies such as convolutional networks can be sufficiently versatile for diverse tasks, including automatic cell counting as well as the extraction of critical parameters, such as growth rate. However, for successful training, current supervised deep learning requires label information, such as the number or positions of cells for each image in a series; obtaining these annotations is very costly in this setting.
Results: We propose a novel machine learning architecture together with a specialized training procedure, which allows us to infuse a deep neural network with human-powered abstraction on the level of data, leading to a high-performing regression model that requires only a very small amount of labeled data. Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
Availability: The project is cross-platform, open-source and free (MIT licensed) software. We make the source code available at https://github.com/dstallmann/cell_cultivation_analysis; the data set is available at https://pub.uni-bielefeld.de/record/2945513.
Affiliation(s)
- Jan P Göpfert
- Machine Learning Group, Bielefeld University, Germany
- Julian Schmitz
- Multiscale Bioengineering, Bielefeld University, Germany
10.
Abstract
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreements. One way of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, including shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction. In addition, we present a list of publicly available and private datasets that have been used in HI research.
11. He S, Minn KT, Solnica-Krezel L, Anastasio MA, Li H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal 2021;68:101892. PMID: 33285481. PMCID: PMC7856299. DOI: 10.1016/j.media.2020.101892.
Abstract
Accurately counting the number of cells in microscopy images is required in many medical diagnosis and biological studies. This task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex background, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we propose a new density regression-based method for automatically counting cells in microscopy images. The proposed method offers two innovations compared to other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) that employs multi-scale image features to estimate cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) are employed to assist the training of intermediate layers of the designed C-FCRN and improve the DRM's performance on unseen datasets. Experimental studies on four datasets demonstrate the superior performance of the proposed method.
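The density-map formulation underlying density regression methods such as this one can be sketched as follows (an illustrative ground-truth generator with assumed dot positions and kernel width, not the authors' C-FCRN): each annotated dot contributes a unit-mass Gaussian, so integrating the map recovers the cell count.

```python
import numpy as np

def density_map(shape, dots, sigma=1.5):
    """Ground-truth density map: one normalized Gaussian per dot annotation."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape, dtype=float)
    for (cy, cx) in dots:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        m += g / g.sum()          # each dot contributes exactly unit mass
    return m

dm = density_map((32, 32), [(8, 8), (8, 24), (24, 16)])
print(round(dm.sum()))            # 3 cells
```

A regression network is then trained to predict such maps from raw images, and counting at test time reduces to summing the predicted map.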
Affiliation(s)
- Shenghua He
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA
- Kyaw Thu Minn
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA; Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Lilianna Solnica-Krezel
- Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Center of Regenerative Medicine, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Mark A Anastasio
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Hua Li
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Carle Cancer Center, Carle Foundation Hospital, Urbana, IL 61801, USA
12. A new convolutional neural network model for peripapillary atrophy area segmentation from retinal fundus images. Appl Soft Comput 2020. DOI: 10.1016/j.asoc.2019.105890.
13. Guo Y, Wu G, Stein J, Krishnamurthy A. SAU-Net: A Universal Deep Network for Cell Counting. Proceedings of the ACM Conference on Bioinformatics, Computational Biology, and Biomedicine (ACM-BCB) 2019;2019:299-306. PMID: 34046647. PMCID: PMC8153189. DOI: 10.1145/3307339.3342153.
Abstract
Image-based cell counting is a fundamental yet challenging task with wide applications in biological research. In this paper, we propose a novel Deep Network designed to universally solve this problem for various cell types. Specifically, we first extend the segmentation network, U-Net with a Self-Attention module, named SAU-Net, for cell counting. Second, we design an online version of Batch Normalization to mitigate the generalization gap caused by data augmentation in small datasets. We evaluate the proposed method on four public cell counting benchmarks - synthetic fluorescence microscopy (VGG) dataset, Modified Bone Marrow (MBM) dataset, human subcutaneous adipose tissue (ADI) dataset, and Dublin Cell Counting (DCC) dataset. Our method surpasses the current state-of-the-art performance in the three real datasets (MBM, ADI and DCC) and achieves competitive results in the synthetic dataset (VGG). The source code is available at https://github.com/mzlr/sau-net.
Affiliation(s)
- Yue Guo
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jason Stein
- University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
14. A novel generic dictionary-based denoising method for improving noisy and densely packed nuclei segmentation in 3D time-lapse fluorescence microscopy images. Sci Rep 2019;9:5654. PMID: 30948741. PMCID: PMC6449358. DOI: 10.1038/s41598-019-41683-3.
Abstract
Time-lapse fluorescence microscopy is an essential technique for quantifying various characteristics of cellular processes, e.g., cell survival, migration, and differentiation. To perform high-throughput quantification of cellular processes, nuclei segmentation and tracking should be performed in an automated manner. Nevertheless, these are challenging tasks due to embedded noise, intensity inhomogeneity, shape variation, and weak nuclei boundaries. Although several nuclei segmentation approaches have been reported in the literature, dealing with embedded noise remains the most challenging part of any segmentation algorithm. We propose a novel denoising algorithm, based on sparse coding, that can both enhance very faint and noisy nuclei signals and detect nuclei positions accurately. Furthermore, our method relies on a limited number of parameters, with only one critical parameter: the approximate size of the objects of interest. We also show that our denoising method, coupled with a classical segmentation method, works properly in the most challenging cases. To evaluate the performance of the proposed method, we tested it on two datasets from the Cell Tracking Challenge, achieving satisfactory results with 96.96% recall on the C. elegans dataset and very high recall (99.3%) on the Drosophila dataset.
15. Segmentation of Total Cell Area in Brightfield Microscopy Images. Methods Protoc 2018;1:mps1040043. PMID: 31164583. PMCID: PMC6481060. DOI: 10.3390/mps1040043.
Abstract
Segmentation is one of the most important steps in microscopy image analysis. Unfortunately, most methods rely on fluorescence images for this task, which is unsuitable when the analysis requires knowledge of the area occupied by cells or when the experimental design does not allow the necessary labeling. In this protocol, we present a simple method, based on edge detection and morphological operations, that separates the total area occupied by cells from the background using only the brightfield channel image. The resulting segmented picture can be further used as a mask for fluorescence quantification and other analyses. The whole procedure is carried out in the open-source software Fiji.
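The edge-detection-plus-morphology pipeline described in this protocol (originally carried out interactively in Fiji) can be approximated in a few lines of SciPy; the synthetic image, edge threshold, and closing iterations below are assumptions for illustration, not the protocol's exact parameter values.

```python
import numpy as np
from scipy import ndimage as ndi

def total_cell_area(img, edge_thr=0.2):
    """Edge-detect (Sobel), close the contours, fill holes, return a cell mask."""
    gx, gy = ndi.sobel(img, axis=0), ndi.sobel(img, axis=1)
    edges = np.hypot(gx, gy) > edge_thr        # gradient magnitude threshold
    closed = ndi.binary_closing(edges, iterations=2)
    return ndi.binary_fill_holes(closed)

# synthetic brightfield-like frame: one round cell on a uniform grey background
img = np.full((20, 20), 0.5)
yy, xx = np.mgrid[0:20, 0:20]
img[(yy - 10) ** 2 + (xx - 10) ** 2 <= 25] = 0.8   # cell interior
cell_mask = total_cell_area(img)
print(cell_mask[10, 10], cell_mask[0, 0])          # True False
```

The resulting boolean mask plays the same role as the protocol's output: it can be applied to a fluorescence channel to restrict quantification to cell-occupied area.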
16. Koyuncu CF, Cetin-Atalay R, Gunduz-Demir C. Object-Oriented Segmentation of Cell Nuclei in Fluorescence Microscopy Images. Cytometry A 2018;93:1019-1028. PMID: 30211975. DOI: 10.1002/cyto.a.23594.
Abstract
Cell nucleus segmentation remains an open and challenging problem, especially for nuclei in cell clumps. Splitting a cell clump would be straightforward if the gradients of boundary pixels between the nuclei were always higher than the others. However, imperfections may exist: intensity inhomogeneities within a nucleus may create spurious boundaries, whereas insufficient intensity differences at the border of overlapping nuclei may cause true boundary pixels to be missed. These imperfections are typically observed at the pixel level, causing local changes in pixel values without changing the semantics on a larger scale. In response to these issues, this article introduces a new nucleus segmentation method that relies on using gradient information not at the pixel level but at the object level. To this end, it proposes to decompose an image into smaller homogeneous subregions, define edge-objects at four different orientations to encode the gradient information at the object level, and devise a merging algorithm in which the edge-objects vote for subregion pairs along their orientations and the pairs are iteratively merged if they receive sufficient votes from multiple orientations. Our experiments on fluorescence microscopy images reveal that this high-level representation and the design of a merging algorithm using edge-objects (gradients at the object level) improve the segmentation results.
Affiliation(s)
- Rengul Cetin-Atalay
- Graduate School of Informatics, Middle East Technical University, 06800, Ankara, Turkey
- Cigdem Gunduz-Demir
- Computer Engineering Department, Bilkent University, 06800, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, 06800, Ankara, Turkey
17. Hughes AJ, Mornin JD, Biswas SK, Beck LE, Bauer DP, Raj A, Bianco S, Gartner ZJ. Quanti.us: a tool for rapid, flexible, crowd-based annotation of images. Nat Methods 2018;15:587-590. PMID: 30065368. PMCID: PMC8863499. DOI: 10.1038/s41592-018-0069-0.
Abstract
We describe Quanti.us, a crowd-based image-annotation platform that provides an accurate alternative to computational algorithms for difficult image-analysis problems. We used Quanti.us for a variety of medium-throughput image-analysis tasks and achieved 10-50× savings in analysis time compared with the same task performed by a single expert annotator. We show equivalent deep learning performance for Quanti.us-derived and expert-derived annotations, which should allow scalable integration with tailored machine learning algorithms.
Affiliation(s)
- Alex J Hughes
- Department of Pharmaceutical Chemistry, University of California, San Francisco, San Francisco, CA, USA
- NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Sujoy K Biswas
- NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA
- Department of Industrial and Applied Genomics, IBM Accelerated Discovery Laboratory, IBM Almaden Research Center, San Jose, CA, USA
- Lauren E Beck
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- David P Bauer
- NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, CA, USA
- Arjun Raj
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Simone Bianco
- NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA
- Department of Industrial and Applied Genomics, IBM Accelerated Discovery Laboratory, IBM Almaden Research Center, San Jose, CA, USA
- Zev J Gartner
- Department of Pharmaceutical Chemistry, University of California, San Francisco, San Francisco, CA, USA
- NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA
- Chan Zuckerberg Biohub, San Francisco, CA, USA
18. Siregar P, Julen N, Hufnagl P, Mutter GL. Computational morphogenesis – Embryogenesis, cancer research and digital pathology. Biosystems 2018;169-170:40-54. DOI: 10.1016/j.biosystems.2018.05.006.
19. Essa E, Xie X. Phase contrast cell detection using multilevel classification. Int J Numer Method Biomed Eng 2018;34:e2916. PMID: 28755437. DOI: 10.1002/cnm.2916.
Abstract
In this paper, we propose a fully automated learning-based approach for detecting cells in time-lapse phase contrast images. The proposed system combines two machine learning approaches to achieve bottom-up image segmentation. We apply pixel-wise classification using random forest (RF) classifiers to determine the potential locations of cells. Each pixel is classified into four categories (cell, mitotic cell, halo effect, and background noise). Various image features are extracted at different scales to train the RF classifier. The resulting probability map is partitioned using the k-means algorithm to form potential cell regions, which are then expanded into neighboring areas to recover missing or broken cell regions. To validate the cell regions, another machine learning method based on bag-of-features and spatial pyramid encoding is proposed. The result of the second classifier can be a validated cell, a merged cell, or a non-cell. If a cell region is classified as a merged cell, it is split using the seeded watershed method. The proposed method is demonstrated on several phase contrast image datasets, i.e., U2OS, HeLa, and NIH 3T3. In comparison to state-of-the-art cell detection techniques, it shows improved performance, particularly in dealing with noise interference and drastic shape variations.
Affiliation(s)
- Ehab Essa: Faculty of Computers and Information Sciences, Mansoura University, Egypt
- Xianghua Xie: Department of Computer Science, Swansea University, UK
|
20
|
|
21
|
Xu M, Papageorgiou DP, Abidi SZ, Dao M, Zhao H, Karniadakis GE. A deep convolutional neural network for classification of red blood cells in sickle cell anemia. PLoS Comput Biol 2017; 13:e1005746. [PMID: 29049291] [PMCID: PMC5654260] [DOI: 10.1371/journal.pcbi.1005746] [Citation(s) in RCA: 78] [Impact Index Per Article: 11.1] [Received: 05/23/2017] [Accepted: 08/29/2017] [Indexed: 11/18/2022]
Abstract
Sickle cell disease (SCD) is a hematological disorder leading to blood vessel occlusion accompanied by painful episodes and even death. Red blood cells (RBCs) of SCD patients have diverse shapes that reveal important biomechanical and bio-rheological characteristics, e.g., their density, fragility, and adhesive properties. Hence, having an objective and effective way of RBC shape quantification and classification will lead to better insights and eventually a better prognosis of the disease. To this end, we have developed an automated, high-throughput, ex-vivo RBC shape classification framework that consists of three stages. First, we present an automatic hierarchical RBC extraction method to detect the RBC region (ROI) from the background, and then separate touching RBCs in the ROI images by applying an improved random walk method based on automatic seed generation. Second, we apply a mask-based RBC patch-size normalization method to normalize segmented single-RBC patches of varying size into a uniform size. Third, we employ deep convolutional neural networks (CNNs) to realize RBC classification; the alternating convolution and pooling operations can deal with non-linear and complex patterns. Furthermore, we investigate shape factor quantification for the classified RBC image data in order to develop a general multiscale shape analysis. We perform several experiments on raw microscopy image datasets from 8 SCD patients (over 7,000 single RBC images) through 5-fold cross validation, for both oxygenated and deoxygenated RBCs. We demonstrate that the proposed framework can successfully classify sickle-shaped RBCs in an automated manner with high accuracy, and we also provide the corresponding shape factor analysis, which can be used synergistically with the CNN analysis for more robust predictions. Moreover, the trained deep CNN exhibits good performance even for a deoxygenated dataset and distinguishes the subtle differences in texture alteration inside the oxygenated and deoxygenated RBCs.
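The mask-based patch-size normalization in the second stage can be sketched as below: crop each segmented cell to its mask bounding box, zero-pad to a square so the shape is not distorted anisotropically, and rescale to a fixed size before CNN classification. The 32x32 target size and the padding strategy are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch: mask-based normalization of variable-size cell patches to a uniform size.
import numpy as np
from scipy.ndimage import zoom

def normalize_patch(image, mask, size=32):
    """Crop `image` to the mask's bounding box, pad to square, rescale to (size, size)."""
    ys, xs = np.nonzero(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    side = max(h, w)
    padded = np.zeros((side, side), dtype=crop.dtype)
    py, px = (side - h) // 2, (side - w) // 2
    padded[py:py + h, px:px + w] = crop
    return zoom(padded, size / side, order=1)  # bilinear rescale

# Example: an elongated (sickle-like) blob normalized to a uniform patch.
img = np.zeros((100, 100))
img[40:45, 20:80] = 1.0
patch = normalize_patch(img, img > 0)
```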
Affiliation(s)
- Mengjia Xu: Key Laboratory of Medical Image Computing of Ministry of Education, Northeastern University, Shenyang, China; Division of Applied Mathematics, Brown University, Providence, Rhode Island, United States of America
- Dimitrios P. Papageorgiou: Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Sabia Z. Abidi: Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Ming Dao: Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Hong Zhao: Key Laboratory of Medical Image Computing of Ministry of Education, Northeastern University, Shenyang, China
- George Em Karniadakis: Division of Applied Mathematics, Brown University, Providence, Rhode Island, United States of America
|
22
|
Liu J, Jung H, Dubra A, Tam J. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting. Invest Ophthalmol Vis Sci 2017; 58:4477-4489. [PMID: 28873173] [PMCID: PMC5586244] [DOI: 10.1167/iovs.16-21003] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Received: 10/27/2016] [Accepted: 07/11/2017] [Indexed: 12/15/2022]
Abstract
Purpose: Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images.
Methods: Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data.
Results: There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test).
Conclusions: MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
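The mosaic metrics that the cone identifications feed into, density and average nearest-neighbor spacing, can be computed directly from detected cell centers. The coordinates and field area below are made up for illustration.

```python
# Sketch: density and mean nearest-neighbor spacing from detected cell centers.
import numpy as np

def mosaic_metrics(coords, field_area):
    """Return (density, mean nearest-neighbor spacing) for detected centers."""
    coords = np.asarray(coords, dtype=float)
    density = len(coords) / field_area
    # Pairwise distances; mask the diagonal so a cell is not its own neighbor.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spacing = d.min(axis=1).mean()
    return density, spacing

pts = [(0, 0), (0, 2), (2, 0), (2, 2)]  # hypothetical centers, in micrometers
density, spacing = mosaic_metrics(pts, field_area=16.0)
print(density, spacing)  # 0.25 cells/um^2, 2.0 um mean spacing
```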
Affiliation(s)
- Jianfei Liu: Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
- HaeWon Jung: Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, California, United States
- Johnny Tam: Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
|
23
|
Hilsenbeck O, Schwarzfischer M, Loeffler D, Dimopoulos S, Hastreiter S, Marr C, Theis FJ, Schroeder T. fastER: a user-friendly tool for ultrafast and robust cell segmentation in large-scale microscopy. Bioinformatics 2017; 33:2020-2028. [DOI: 10.1093/bioinformatics/btx107] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Received: 06/13/2016] [Accepted: 02/21/2017] [Indexed: 12/20/2022]
Affiliation(s)
- Oliver Hilsenbeck: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Dirk Loeffler: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Sotiris Dimopoulos: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Simon Hastreiter: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Carsten Marr: Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany
- Fabian J Theis: Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany; Department of Mathematics, Technische Universität München, Garching, Germany
- Timm Schroeder: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
|
24
|
Nketia TA, Sailem H, Rohde G, Machiraju R, Rittscher J. Analysis of live cell images: Methods, tools and opportunities. Methods 2017; 115:65-79. [DOI: 10.1016/j.ymeth.2017.02.007] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Received: 10/04/2016] [Revised: 02/20/2017] [Accepted: 02/21/2017] [Indexed: 01/19/2023]
|
25
|
Florindo JB, Landini G, Bruno OM. Three-dimensional connectivity index for texture recognition. Pattern Recognit Lett 2016. [DOI: 10.1016/j.patrec.2016.09.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 10/20/2022]
|
26
|
Automatic detection and measurement of viral replication compartments by ellipse adjustment. Sci Rep 2016; 6:36505. [PMID: 27819325] [PMCID: PMC5098162] [DOI: 10.1038/srep36505] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Received: 04/21/2016] [Accepted: 10/13/2016] [Indexed: 01/03/2023]
Abstract
Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work, an automatic algorithm designed for the detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient, and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, obtaining relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the adenovirus wild type or an adenovirus mutant that is null for expression of a viral protein known to affect activities associated with RCs, resulting in deficient viral progeny production.
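A deterministic, non-iterative least-squares fit in the spirit of the described ellipse adjustment can be sketched as follows: fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to boundary points by ordinary least squares. This generic conic fit is an illustrative stand-in, not the authors' exact formulation.

```python
# Sketch: non-iterative least-squares conic fit to boundary points.
import numpy as np

def fit_conic(x, y):
    """Least-squares conic coefficients (a, b, c, d, e) for points (x, y)."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs

# Noise-free points on a circle of radius 2 centered at the origin, whose
# conic form is 0.25*x^2 + 0.25*y^2 = 1.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
a, b, c, d, e = fit_conic(2 * np.cos(t), 2 * np.sin(t))
print(np.round([a, b, c, d, e], 3))  # close to [0.25, 0, 0.25, 0, 0]
```

For noisy segmented boundaries, the same linear system yields the best-fit ellipse parameters in one solve, which is what makes this family of methods deterministic and fast.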
|
27
|
Xing F, Yang L. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review. IEEE Rev Biomed Eng 2016; 9:234-63. [PMID: 26742143] [PMCID: PMC5233461] [DOI: 10.1109/rbme.2016.2515127] [Citation(s) in RCA: 213] [Impact Index Per Article: 26.6] [Indexed: 12/29/2022]
Abstract
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology and tissue structure. Manual assessment is labor intensive and prone to interobserver variation. Computer-aided methods, which can significantly improve objectivity and reproducibility, have attracted a great deal of interest in the recent literature. Within the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images, including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopy. In addition, we discuss the challenges facing current methods and potential future work in nucleus/cell detection and segmentation.
|