1. Chen X, Zheng H, Tang H, Li F. Multi-scale perceptual YOLO for automatic detection of clue cells and trichomonas in fluorescence microscopic images. Comput Biol Med 2024;175:108500. PMID: 38678942. DOI: 10.1016/j.compbiomed.2024.108500.
Abstract
Vaginitis is a common disease among women with a high recurrence rate. The primary diagnostic method is fluorescence microscopic inspection, but manual inspection is inefficient and can lead to false or missed detections, so automatic cell identification and localization in microscopic images are needed. For vaginitis diagnosis, clue cells and trichomonas are two important indicators, and they are difficult to detect because of their differing scales and image characteristics. This study proposes a Multi-Scale Perceptual YOLO (MSP-YOLO) with a super-resolution reconstruction branch to meet the detection requirements of clue cells and trichomonas. Based on the scales and image characteristics of clue cells and trichomonas, we added a super-resolution reconstruction branch to the detection network; this branch guides the detection branch to focus on subtle feature differences. We also proposed an attention-based feature fusion module that incorporates a dilated convolution group. This module makes the network attend to the non-centered features of large clue-cell targets, which enhances detection sensitivity. Experimental results show that the proposed MSP-YOLO improves sensitivity without compromising specificity. For clue cell and trichomonas detection, the network achieved sensitivities of 0.706 and 0.910, respectively, which were 0.218 and 0.051 higher than those of the baseline model. In this study, the characteristics of the super-resolution reconstruction task are used to guide the network to extract and process image features effectively. The increased sensitivity of the proposed network makes automatic vaginitis detection feasible.
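The "dilated convolution group" in this abstract enlarges the receptive field without adding parameters, which is what lets the fusion module see the non-centered parts of large clue cells. As a rough illustration of the mechanism (not the authors' implementation), a minimal NumPy sketch of one 2D dilated convolution:

```python
import numpy as np

def dilated_conv2d(x, k, dilation=1):
    """Valid-mode 2D cross-correlation with a dilated kernel.

    A kernel of size (kh, kw) with dilation d covers an effective window of
    ((kh-1)*d + 1, (kw-1)*d + 1) pixels, enlarging the receptive field
    without adding weights.
    """
    kh, kw = k.shape
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    oh = x.shape[0] - eff_h + 1
    ow = x.shape[1] - eff_w + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # sample the input on a dilated grid, then weight by the kernel
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * k)
    return out
```

With `dilation=1` this reduces to an ordinary valid convolution; with `dilation=2` a 3×3 kernel sees a 5×5 neighbourhood.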
Affiliation(s)
- Xi Chen
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Haoyue Zheng
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Haodong Tang
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Fan Li
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
2. Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023;90:102969. PMID: 37802010. DOI: 10.1016/j.media.2023.102969.
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data are limited, a setting that previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
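The paper's differentiable, stochastic augmentation module operates inside a GAN training loop; its core idea is that the same random transform must be applied to the real and generated batches, so the discriminator cannot exploit the augmentation itself to tell them apart. A framework-free, hypothetical NumPy sketch of that idea (not the paper's module, which is differentiable in a deep learning framework):

```python
import numpy as np

def shared_random_shift(real, fake, max_shift=2, rng=None):
    """Apply the SAME random translation to real and generated batches.

    real, fake: arrays of shape (batch, H, W). One (dy, dx) shift is drawn
    per call and used for both batches, mimicking paired stochastic
    discriminator augmentation.
    """
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shift = lambda b: np.roll(b, (dy, dx), axis=(1, 2))
    return shift(real), shift(fake)
```

Because one shift is shared, any statistic the discriminator computes on the augmented real batch is perturbed identically in the augmented fake batch.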
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
3. Wu P, Weng H, Luo W, Zhan Y, Xiong L, Zhang H, Yan H. An improved Yolov5s based on transformer backbone network for detection and classification of bronchoalveolar lavage cells. Comput Struct Biotechnol J 2023;21:2985-3001. PMID: 37249972. PMCID: PMC10209489. DOI: 10.1016/j.csbj.2023.05.008.
Abstract
Biological tissue information of the lung, such as cells and proteins, can be obtained from bronchoalveolar lavage fluid (BALF), which can therefore serve as a complement to lung biopsy pathology. BALF cells can be confused with one another because of the similarity of their characteristics and differences in how sections are handled or viewed, which poses a great challenge for cell detection. In this paper, an improved YOLOv5s based on a Transformer backbone network is proposed for the detection and classification of BALF cells, focusing on four cell types: macrophages, lymphocytes, neutrophils and eosinophils. The network is mainly based on YOLOv5s and uses Swin Transformer V2 in the backbone to improve cell detection accuracy by capturing global information; the C3Ghost module (a variant of the convolutional neural network architecture) is used in the neck network to reduce the number of parameters during feature-channel fusion and to improve feature expressiveness. In addition, the Efficient Intersection over Union loss (EIoU loss) was used as the bounding-box regression loss function to speed up bounding-box regression, yielding higher accuracy. Experiments showed that our model achieves an mAP of 81.29% and a Recall of 80.47%, improvements of 3.3% and 3.67% over the original YOLOv5s. We also compared it with YOLOv7 and the newly launched YOLOv8s: mAP improved by 0.02% and 2.36% over YOLOv7 and YOLOv8s respectively, while our model's FPS was higher than both, achieving a balance of efficiency and accuracy and further demonstrating its superiority.
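The EIoU loss mentioned above augments the plain IoU loss with penalties on the centre-point distance and on width/height differences, each normalized by the smallest enclosing box. A hedged sketch following the published EIoU formulation (illustrative, not the paper's code; boxes are `(x1, y1, x2, y2)` corner tuples):

```python
def eiou_loss(a, b, eps=1e-9):
    """EIoU = 1 - IoU + center-distance term + width and height terms."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection / union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # smallest box enclosing both
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # squared center distance over squared enclosing diagonal
    dx = (ax1 + ax2) / 2 - (bx1 + bx2) / 2
    dy = (ay1 + ay2) / 2 - (by1 + by2) / 2
    dist = (dx * dx + dy * dy) / (cw * cw + ch * ch + eps)
    # direct width/height penalties (the part EIoU adds over CIoU)
    wpen = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / (cw * cw + eps)
    hpen = ((ay2 - ay1) - (by2 - by1)) ** 2 / (ch * ch + eps)
    return 1.0 - iou + dist + wpen + hpen
```

For identical boxes the loss is zero; for disjoint boxes it exceeds 1, so gradients keep pushing boxes together even with zero overlap.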
Affiliation(s)
- Puzhen Wu
- The Faculty of Architecture, Civil and Transportation Engineering, Beijing University of Technology, Beijing 100124, China
- Beijing-Dublin International College, Beijing University of Technology, Beijing 100124, China
- Han Weng
- Beijing-Dublin International College, Beijing University of Technology, Beijing 100124, China
- Wenting Luo
- Department of Pathophysiology, Medical College, Nanchang University, 461 Bayi Road, Nanchang 330006, China
- Yi Zhan
- Beijing-Dublin International College, Beijing University of Technology, Beijing 100124, China
- Lixia Xiong
- Department of Pathophysiology, Medical College, Nanchang University, 461 Bayi Road, Nanchang 330006, China
- Hongyan Zhang
- Department of Burn, The First Affiliated Hospital, Nanchang University, 17 Yongwaizheng Road, Nanchang 330066, China
- Hai Yan
- The Faculty of Architecture, Civil and Transportation Engineering, Beijing University of Technology, Beijing 100124, China
4. Maruyama S, Sakabe N, Ito C, Shimoyama Y, Sato S, Ikeda K. Effect of Specimen Processing Technique on Cell Detection and Classification by Artificial Intelligence. Am J Clin Pathol 2023;159:448-454. PMID: 36933198. DOI: 10.1093/ajcp/aqac178.
Abstract
OBJECTIVES: Cytomorphology is known to differ depending on the processing technique, and these differences pose a problem for automated diagnosis using deep learning. We examined the as-yet unclarified relationship between cell detection or classification using artificial intelligence (AI) and the AutoSmear (Sakura Finetek Japan) and liquid-based cytology (LBC) processing techniques.
METHODS: The "You Only Look Once" (YOLO), version 5x, algorithm was trained on AutoSmear and LBC preparations of 4 cell lines: lung cancer (LC), cervical cancer (CC), malignant pleural mesothelioma (MM), and esophageal cancer (EC). Detection and classification rates were used to evaluate the accuracy of cell detection.
RESULTS: When preparations from the same processing technique were used for training and detection in the 1-cell (1C) model, the AutoSmear model had a higher detection rate than the LBC model. When different processing techniques were used for training and detection, detection rates of LC and CC were significantly lower in the 4-cell (4C) model than in the 1C model, and those of MM and EC were approximately 10% lower in the 4C model.
CONCLUSIONS: In AI-based cell detection and classification, attention should be paid to cells whose morphology changes significantly with the processing technique, suggesting that training models should account for the technique used.
Affiliation(s)
- Sayumi Maruyama
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Nanako Sakabe
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Chihiro Ito
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Yuka Shimoyama
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Shouichi Sato
- Clinical Engineering, Faculty of Medical Sciences, Juntendo University, Urayasu, Japan
- Katsuhide Ikeda
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
5. Preißinger K, Kézsmárki I, Török J. An automated neural network-based stage-specific malaria detection software using dimension reduction: The malaria microscopy classifier. MethodsX 2023;10:102189. PMID: 37168772. PMCID: PMC10165163. DOI: 10.1016/j.mex.2023.102189.
Abstract
Due to climate change and the COVID-19 pandemic, the number of malaria cases and deaths, caused by the Plasmodium genus, of which P. falciparum is the most common and most lethal to humans, increased between 2019 and 2020. Reversing this trend and eliminating malaria worldwide requires improvements in malaria diagnosis, in which artificial intelligence (AI) has recently been demonstrated to have great potential. One of the main reasons for using neural networks (NNs) is the time saved through automating the process and the elimination of human error. When classifying two-dimensional images of red blood cells (RBCs), the number of parameters fitted by the NN is extremely high, which strongly influences the performance of the network, especially for training sets of moderate size. The complicated handling of malaria culturing and sample preparation limits the efficiency of NNs not only through small training sets but also through the uneven distribution of RBC categories. To boost the performance of microscopy techniques in malaria diagnosis, our approach resolves these drawbacks by reducing the dimension of the input data and by data augmentation, respectively. We assess the performance of our approach on images recorded by light microscopy (LM), atomic force microscopy (AFM), and fluorescence microscopy (FM). Our tool, the Malaria Stage Classifier, provides fast, high-accuracy recognition by (1) identifying individual RBCs in multi-cell microscopy images, (2) extracting characteristic one-dimensional cross-sections from individual RBC images, selected by a simple algorithm to contain key information about the status of the RBCs, and (3) classifying the malaria blood stages from these cross-sections. We demonstrate that our method is applicable to images recorded by various microscopy techniques and is available as a software package.
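Step (2) of the pipeline, sampling a characteristic 1D cross-section through a detected cell, can be pictured with a hypothetical nearest-neighbour line sampler (the paper's selection algorithm is not reproduced here; `center`, `angle_deg`, and `length` are assumed parameters, not the authors' API):

```python
import numpy as np

def line_profile(img, center, angle_deg, length):
    """Sample a 1D cross-section of `img` through `center` at a given angle.

    Uses nearest-neighbour indexing and clips samples to the image bounds.
    Returns `length` intensity values along the line.
    """
    t = np.linspace(-length / 2, length / 2, length)
    a = np.deg2rad(angle_deg)
    rows = np.clip(np.round(center[0] + t * np.sin(a)).astype(int),
                   0, img.shape[0] - 1)
    cols = np.clip(np.round(center[1] + t * np.cos(a)).astype(int),
                   0, img.shape[1] - 1)
    return img[rows, cols]
```

A classifier then sees only these `length`-element vectors instead of full 2D cell crops, which is the dimension reduction the abstract refers to.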
Affiliation(s)
- Preißinger Katharina
- Department of Applied Biotechnology and Food Sciences, BME, Budapest 1111, Hungary
- Research Center for Natural Sciences, Institute of Enzymology, Budapest 1111, Hungary
- Department of Physics, BME, Budapest 1111, Hungary
- Department of Experimental Physics V, University of Augsburg, Augsburg 86159, Germany
- Corresponding author at: Department of Experimental Physics V, University of Augsburg, Augsburg 86159, Germany
- Kézsmárki István
- Department of Physics, BME, Budapest 1111, Hungary
- Department of Experimental Physics V, University of Augsburg, Augsburg 86159, Germany
- Török János
- Department of Theoretical Physics, Institute of Physics, BME, Műegyetem rkp. 3, Budapest H-1111, Hungary
- MTA-BME Morphodynamics Research Group, BME, Budapest 1111, Hungary
6. Ghanbarzadeh-Dagheyan A, Nili VA, Ejtehadi M, Savabi R, Kavehvash Z, Ahmadian MT, Vahdat BV. Time-domain ultrasound as prior information for frequency-domain compressive ultrasound for intravascular cell detection: A 2-cell numerical model. Ultrasonics 2022;125:106791. PMID: 35809517. DOI: 10.1016/j.ultras.2022.106791.
Abstract
This study proposes a new method for detecting a weak scatterer among strong scatterers using prior-information ultrasound (US) imaging. A natural application of this approach is in vivo cell detection in the bloodstream, where red blood cells (RBCs) serve as identifiable strong scatterers; in vivo cell detection can help diagnose cancer at its earliest stages, increasing patients' chances of survival. This work combines time-domain US with frequency-domain compressive US imaging to detect a 20-μm MCF-7 circulating tumor cell (CTC) among a number of RBCs within a simulated venule inside the mouth. The 2D image reconstructed from the time-domain US is used to simulate the reflected and scattered pressure field from the RBCs, which is then measured at the receiver locations. The RBCs are tagged once by a human operator and once, automatically, by template-based computer vision. Next, the RBC signal is subtracted from the measured total signal in the frequency domain to generate the scattered-field data coming from the CTC alone. Feeding that signal and the background pressure field into an ℓ1-norm-based compressive sensing code enables detection of the CTC at various locations. Because errors can arise in determining the RBCs' locations and acoustic properties in the real world, small errors (up to 10% in the former and 5% in the latter) are purposefully introduced into the model, to which the proposed method is shown to be resilient. Localization errors are smaller than 12 μm when a human tags the RBCs and smaller than 25 μm when computer vision is applied. Despite its limitations, this study reports, for the first time, results of combining two US modalities for cell detection and introduces a unique and useful application for ultrahigh-frequency US imaging. The method can also be used to detect weak scatterers with ultrasound waves in other applications.
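The ℓ1 recovery step can be sketched under the usual compressive-sensing model y = Ax with sparse x (here x stands in for the CTC's isolated scattered-field contribution on a location grid), using iterative soft-thresholding (ISTA). This is a generic illustrative solver, not the paper's code:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```

With far fewer measurements than unknowns, the ℓ1 penalty concentrates the recovered energy on a few grid positions, which is how a single weak scatterer can be localized after the strong-scatterer signal has been subtracted.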
Affiliation(s)
- Ashkan Ghanbarzadeh-Dagheyan
- Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran; Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Vahid Amin Nili
- Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Mehdi Ejtehadi
- Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran
- Reza Savabi
- School of Mechanical Engineering, University of Tehran, Tehran, Iran
- Zahra Kavehvash
- Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
7. Ghaznavi A, Rychtáriková R, Saberioon M, Štys D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput Biol Med 2022;147:105805. PMID: 35809410. DOI: 10.1016/j.compbiomed.2022.105805.
Abstract
Living-cell segmentation from bright-field light microscopy images is challenging due to image complexity and temporal changes in the living cells. Recently developed deep learning (DL)-based methods have become popular in medical and microscopy image segmentation tasks due to their success and promising outcomes. The main objective of this paper is to develop a deep-learning, U-Net-based method to segment living cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our datasets, a residual attention U-Net was proposed and compared with an attention U-Net and a simple U-Net architecture. The attention mechanism highlights salient features and suppresses activations in irrelevant image regions; the residual mechanism overcomes the vanishing-gradient problem. The Mean-IoU score for our datasets reaches 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation results, in both the Mean-IoU and Dice metrics, were achieved by applying the residual and attention mechanisms together. Applying the watershed method to this best (residual attention) semantic segmentation result produced a segmentation with specific information for each cell.
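The Mean-IoU scores reported above average the intersection-over-union across classes. For binary (cell vs. background) masks this reduces to a few lines; a minimal NumPy sketch:

```python
import numpy as np

def mean_iou(pred, gt):
    """Mean intersection-over-union over the two classes of binary masks.

    pred, gt: integer arrays of 0s (background) and 1s (cell) with equal shape.
    Classes with an empty union are skipped.
    """
    ious = []
    for cls in (0, 1):
        p, g = (pred == cls), (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging over both classes (rather than scoring only the foreground) is what keeps the metric honest on images dominated by background, which is typical for sparse cell fields.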
Affiliation(s)
- Ali Ghaznavi
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic
- Renata Rychtáriková
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic
- Mohammadmehdi Saberioon
- Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Telegrafenberg, Potsdam 14473, Germany
- Dalibor Štys
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic
8. Bakhshpour M, Piskin AK, Yavuz H, Denizli A. Preparation of Notch-4 Receptor Containing Quartz Crystal Microbalance Biosensor for MDA MB 231 Cancer Cell Detection. Methods Mol Biol 2022;2393:515-533. PMID: 34837197. DOI: 10.1007/978-1-0716-1803-5_27.
Abstract
Quartz crystal microbalance (QCM) is a highly sensitive system used as a biosensor for biomolecules and cells. Detection and characterization of cancer cells in circulation or in biopsy samples is of crucial importance for cancer diagnosis. Here, we introduce approaches for breast cancer cell detection via their surface molecules. The sensor system is based on preliminary coating of the QCM chip with polymeric nanoparticles to increase the surface area and allow proteins to attach to the chip surface; a specific protein is then attached to functionalize the chip. Breast cancer cells, with fibroblast cells as a control, are cultured and applied to this chip. The functionalized QCM system can detect breast cancer cells with high affinity and selectivity. We present the preparation methods for QCM-based sensors for selective detection of MDA MB 231 cancer cells; selectivity of the QCM-based sensor is assessed in the presence of L929 mouse fibroblast cells.
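QCM sensing rests on the Sauerbrey relation: a rigid mass load shifts the crystal's resonance frequency in proportion to the added mass per unit area. A small illustrative sketch of that textbook equation (not from this chapter; the quartz constants are standard literature values):

```python
import math

RHO_Q = 2.648      # density of quartz, g/cm^3
MU_Q = 2.947e11    # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_df(delta_m_g, f0_hz=5e6, area_cm2=1.0):
    """Sauerbrey frequency shift (Hz) for a rigid mass load delta_m_g (g).

    df = -2 f0^2 dm / (A * sqrt(rho_q * mu_q)); mass ADDED to the crystal
    makes the resonance frequency DROP, hence the negative sign.
    """
    return -2.0 * f0_hz ** 2 * delta_m_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))
```

For a 5 MHz crystal this gives roughly -56.6 Hz per μg/cm², the familiar sensitivity figure; captured cells add mass and are read out as a frequency decrease.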
Affiliation(s)
| | - Ayse Kevser Piskin
- Faculty of Medicine, Medical Biochemistry Department, Hacettepe University, Ankara, Turkey
| | - Handan Yavuz
- Department of Chemistry, Hacettepe University, Ankara, Turkey
| | - Adil Denizli
- Department of Chemistry, Hacettepe University, Ankara, Turkey.
| |
9. Aonishi T, Maruyama R, Ito T, Miyakawa H, Murayama M, Ota K. Imaging data analysis using non-negative matrix factorization. Neurosci Res 2021:S0168-0102(21)00247-9. PMID: 34953961. DOI: 10.1016/j.neures.2021.12.001.
Abstract
The rapid progress of imaging devices such as two-photon microscopes has made it possible to measure the activity of thousands to tens of thousands of cells at single-cell resolution over a wide field of view (FOV). However, it is not feasible to manually identify thousands of cells in such wide-FOV data, and several research groups have developed machine learning methods for detecting cells automatically. Many of the recently proposed methods use dynamic activity information rather than static morphological information and are based on non-negative matrix factorization (NMF). In this review, we outline cell-detection methods related to NMF. To raise open issues in NMF-based cell detection, we also introduce our current development of a non-NMF method capable of detecting about 17,000 cells in ultra-wide-FOV data.
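In this setting NMF factorizes the movie, reshaped as a non-negative matrix V (pixels × frames), into spatial footprints W and temporal activity traces H with V ≈ WH. A minimal multiplicative-update sketch (illustrative only; the pipelines this review covers, such as constrained NMF variants, add spatial priors and denoising):

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V (pixels x frames) ~= W @ H.

    W's columns play the role of cell footprints, H's rows the activity
    traces. Non-negativity is preserved because updates are multiplicative.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update traces
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update footprints
    return W, H
```

The non-negativity is the key modelling assumption: footprints and fluorescence traces cannot be negative, so the parts-based factorization tends to isolate individual cells.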
10. Uka A, Ndreu Halili A, Polisi X, Topal AO, Imeraj G, Vrana NE. Basis of Image Analysis for Evaluating Cell Biomaterial Interaction Using Brightfield Microscopy. Cells Tissues Organs 2021;210:77-104. PMID: 34186537. DOI: 10.1159/000512969.
Abstract
Medical imaging is a growing field that has stemmed from the need for noninvasive diagnosis, monitoring, and analysis of biological systems. With advances in the medical field and the new techniques used to treat disease, the prevalence of implanted biomedical devices will soon be even greater. Materials implanted in a biological system are used in diverse fields, which require lengthy evaluation and validation processes; however, the evaluation of biomaterial toxicity has not yet been fully automated. Moreover, image analysis is an integral part of biomaterial research, but it lies outside the core competencies of a significant portion of biomaterial scientists, which results in the predominant use of ready-made tools. Detailed image analysis can be conducted once all the relevant parameters, including the inherent characteristics of the image acquisition techniques, are considered. Herein, we cover the image analysis-based techniques currently used to assess biomaterial/cell interaction, with a specific focus on unstained brightfield microscopy acquired mostly, but not exclusively, in microfluidic systems, which serve as multiparametric sensing platforms for noninvasive experimental measurements. We present the major image acquisition techniques that enable point-of-care testing when incorporated with microfluidic cells, discuss the constraints imposed by the geometry of the system and the material being analyzed, and examine the challenges that arise in image analysis when unstained cell imaging is employed. Emerging techniques, such as machine learning and cell-specific pattern-recognition algorithms, and potential future directions are discussed. Automating and optimizing biomaterial assessment can facilitate the discovery of novel biomaterials while making the validation of biomedical innovations cheaper and faster.
Affiliation(s)
- Arban Uka
- Department of Computer Engineering, Epoka University, Tiranë, Albania
- Albana Ndreu Halili
- Department of Computer Engineering, Epoka University, Tiranë, Albania; Department of Information Technology, Aleksandër Moisiu University, Durrës, Albania
- Xhoena Polisi
- Department of Computer Engineering, Epoka University, Tiranë, Albania
- Ali O Topal
- Department of Computer Engineering, Epoka University, Tiranë, Albania
- Gent Imeraj
- Department of Computer Engineering, Epoka University, Tiranë, Albania
- Nihal E Vrana
- Spartha Medical, Strasbourg, France; INSERM UMR 1121, Strasbourg, France
11. Sun Y, Huang X, Zhou H, Zhang Q. SRPN: similarity-based region proposal networks for nuclei and cells detection in histology images. Med Image Anal 2021;72:102142. PMID: 34198042. DOI: 10.1016/j.media.2021.102142.
Abstract
The detection of nuclei and cells in histology images is of great value in both clinical practice and pathological studies. However, factors such as the morphological variation of nuclei and cells make it a challenging task on which conventional object detection methods often cannot obtain satisfactory performance. A detection task consists of two sub-tasks, classification and localization, and under dense object detection, classification is key to boosting detection performance. Considering this, we propose similarity-based region proposal networks (SRPN) for nuclei and cell detection in histology images. In particular, a customised convolution layer termed the embedding layer is designed for network building. The embedding layer is added to the region proposal networks, enabling them to learn discriminative features through similarity learning; features obtained by similarity learning significantly boost classification performance compared to conventional methods. SRPN can be easily integrated into standard convolutional neural network architectures such as Faster R-CNN and RetinaNet. We test the proposed approach on multi-organ nuclei detection and signet ring cell detection in histological images. Experimental results show that networks applying similarity learning achieved superior performance on both tasks compared to their counterparts. In particular, SRPN achieves state-of-the-art performance on the MoNuSeg benchmark for nuclei segmentation and detection compared with previous methods, and on the signet ring cell detection benchmark compared with baselines. The source code is publicly available at: https://github.com/sigma10010/nuclei_cells_det.
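The embedding layer's role is to map proposals into a space where same-class instances sit close together. As a framework-free illustration of similarity-based classification (not the SRPN layer itself), here is assignment of a feature vector to the nearest class prototype under cosine similarity:

```python
import numpy as np

def cosine_sim(a, b, eps=1e-12):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def nearest_prototype(feat, prototypes):
    """Index of the class prototype most similar to `feat`.

    prototypes: list of embedding vectors, one per class.
    """
    return int(np.argmax([cosine_sim(feat, p) for p in prototypes]))
```

Training with a similarity objective pulls embeddings toward their class prototype and pushes other classes away, which is why the learned features discriminate better than raw convolutional features in dense detection.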
Affiliation(s)
- Yibao Sun
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Huiyu Zhou
- School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
12. Gregório da Silva BC, Tam R, Ferrari RJ. Detecting cells in intravital video microscopy using a deep convolutional neural network. Comput Biol Med 2020;129:104133. PMID: 33285356. DOI: 10.1016/j.compbiomed.2020.104133.
Abstract
The analysis of leukocyte recruitment in intravital video microscopy (IVM) is essential to the understanding of inflammatory processes. However, because IVM images often present a large variety of visual characteristics, it is hard for an expert human or even conventional machine learning techniques to detect and count the massive amount of cells and extract statistical measures precisely. Convolutional neural networks are a promising approach to overcome this problem, but due to the difficulty of labeling cells, large data sets with ground truth are rare. The present work explores an adaptation of the RetinaNet model with a suite of augmentation techniques and transfer learning for detecting leukocytes in IVM data. The augmentation techniques include simulating the Airy pattern and motion artifacts present in microscopy imaging, followed by traditional photometric, geometric and smooth elastic transformations to reproduce color and shape changes in cells. In addition, we analyzed the use of different network backbones, feature pyramid levels, and image input scales. We have found that even with limited data, our strategy not only enables training without overfitting but also boosts generalization performance. Among several experiments, the model reached a value of 94.84 for the average precision (AP) metric as our best outcome when using data from different image modalities. We also compared our results with conventional image processing techniques and open-source tools. The results showed an outstanding precision of the method compared with other approaches, presenting low error rates for cell counting and centroid distances. Code is available at: https://github.com/brunoggregorio/retinanet-cell-detection.
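Two of the augmentations listed above, motion artifacts and photometric changes, are easy to sketch in NumPy. This is an illustrative stand-in only, not the paper's actual Airy-pattern simulation or elastic transforms:

```python
import numpy as np

rng = np.random.default_rng(42)

def motion_blur(image, length=5, horizontal=True):
    """Simulate a motion artifact by convolving with a 1-D line kernel."""
    kernel = np.ones(length) / length
    axis = 1 if horizontal else 0
    return np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="same"), axis, image)

def photometric_jitter(image, max_gain=0.2, max_bias=0.1):
    """Random brightness/contrast change, clipped back to [0, 1]."""
    gain = 1.0 + rng.uniform(-max_gain, max_gain)
    bias = rng.uniform(-max_bias, max_bias)
    return np.clip(gain * image + bias, 0.0, 1.0)

img = rng.random((32, 32))
aug = photometric_jitter(motion_blur(img))
```

Chained this way, each training image yields many plausible variants, which is what allows training without overfitting on a small annotated set.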
Affiliation(s)
- Bruno C Gregório da Silva
- Departamento de Computação, Universidade Federal de São Carlos, Washington Luís Rd., Km 235, 13.565-905, São Carlos, SP, Brazil.
- Roger Tam
- Department of Radiology, School of Biomedical Engineering, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, 2215 Wesbrook Mall, V6T 2B5, Vancouver, Canada
- Ricardo J Ferrari
- Departamento de Computação, Universidade Federal de São Carlos, Washington Luís Rd., Km 235, 13.565-905, São Carlos, SP, Brazil
13
Abstract
In medical imaging, CycleGAN has been used for various image generation tasks, including image synthesis, image denoising, and data augmentation. However, when pushing the technical limits of medical imaging, there can be substantial variation in image quality. Here, we demonstrate that images generated by CycleGAN can be improved through explicit grading of image quality, which we call stratified CycleGAN. In this image generation task, CycleGAN is used to upgrade the image quality and content of near-infrared fluorescent (NIRF) retinal images. After manual assignment of grading scores to a small subset of the data, semi-supervised learning is applied to propagate grades across the remainder of the data and set up the training data. These scores are embedded into the CycleGAN by adding the grading score as a conditional input to the generator and by integrating an image quality classifier into the discriminator. We validate the efficacy of the proposed stratified CycleGAN on pairs of NIRF images of the same retinal regions (imaged with and without correction of optical aberrations achieved using adaptive optics), with the goal of restoring image quality in aberrated images such that cellular-level detail can be obtained. Overall, stratified CycleGAN generated higher-quality synthetic images than traditional CycleGAN. Evaluation of cell detection accuracy confirmed that the synthetic images were faithful to ground-truth images of the same cells. Across this challenging dataset, the F1-score improved from 76.9 ± 5.7% with traditional CycleGAN to 85.0 ± 3.4% with stratified CycleGAN. These findings demonstrate the potential of stratified CycleGAN to improve the synthesis of medical images that exhibit a graded variation in image quality.
14
Abstract
An important challenge in pre-processing data from droplet-based single-cell RNA sequencing protocols is distinguishing barcodes associated with real cells from those binding background reads. Existing methods test barcodes individually and consequently do not leverage the strong cell-to-cell correlation present in most datasets. To improve cell detection, we introduce CB2, a cluster-based approach for distinguishing real cells from background barcodes. As demonstrated in simulated and case study datasets, CB2 has increased power for identifying real cells which allows for the identification of novel subpopulations and improves the precision of downstream analyses.
Affiliation(s)
- Zijian Ni
- Department of Statistics, University of Wisconsin-Madison, Madison, WI, USA
- Shuyang Chen
- Department of Statistics, University of Wisconsin-Madison, Madison, WI, USA
- Jared Brown
- Department of Statistics, University of Wisconsin-Madison, Madison, WI, USA
- Christina Kendziorski
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, USA
15
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298 DOI: 10.1016/j.media.2020.101720]
Abstract
This paper presents a new deep regression model, which we call DeepDistance, for cell detection in images acquired with inverted microscopy. This model considers cell detection as the task of finding the most probable locations that suggest cell centers in an image, and represents this main task as a regression task of learning an inner distance metric. However, unlike previously reported regression-based methods, the DeepDistance model approaches this learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem, and defines its learning as complementary to the main cell detection task. To learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel to the two regression tasks, also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells and delineate their boundaries, even for a cell line not used in training, and improve on the results of their counterparts.
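Distance-based regression targets of this kind can be built from dot annotations with a distance transform. This is a plausible construction for illustration only; the paper's exact definitions and normalizations of the inner and outer distances may differ:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_targets(shape, centers, d_max=8.0):
    """Distance-map regression targets from dot annotations.

    `inner` peaks (value 1) at each annotated cell center and decays
    linearly to 0 at `d_max` pixels; `outer` is the same distance map
    normalized to [0, 1] over the image, a crude stand-in for a
    normalized outer distance.
    """
    marker = np.ones(shape, bool)
    for r, c in centers:
        marker[r, c] = False              # zeros at annotated cell centers
    dist = distance_transform_edt(marker)  # distance to nearest center
    inner = np.clip(1.0 - dist / d_max, 0.0, 1.0)
    outer = dist / dist.max()
    return inner, outer

inner, outer = distance_targets((32, 32), [(8, 8), (24, 20)])
```

A network trained to regress `inner` then yields peaks at cell centers that a simple maxima-finding step can convert into detections.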
Affiliation(s)
- Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey
- Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey
- Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey
16
Cui L, Li H, Hui W, Chen S, Yang L, Kang Y, Bo Q, Feng J. A deep learning-based framework for lung cancer survival analysis with biomarker interpretation. BMC Bioinformatics 2020; 21:112. [PMID: 32183709 PMCID: PMC7079513 DOI: 10.1186/s12859-020-3431-z]
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related deaths in both men and women in the United States, and it has a much lower five-year survival rate than many other cancers. Accurate survival analysis is urgently needed for better disease diagnosis and treatment management. RESULTS In this work, we propose a survival analysis system that takes advantage of recently emerging deep learning techniques. The proposed system consists of three major components. 1) The first component is an end-to-end cellular feature learning module using a deep neural network with global average pooling. The learned cellular representations encode high-level, biologically relevant information without requiring individual cell segmentation; they are aggregated into patient-level feature vectors using a locality-constrained linear coding (LLC)-based bag of words (BoW) encoding algorithm. 2) The second component is a Cox proportional hazards model with an elastic net penalty for robust feature selection and survival analysis. 3) The third component is a biomarker interpretation module that can help localize the image regions that contribute to the survival model's decision. Extensive experiments show that the proposed survival model has excellent predictive power for a public (i.e., The Cancer Genome Atlas) lung cancer dataset in terms of two commonly used metrics: the log-rank test (p-value) of the Kaplan-Meier estimate and the concordance index (c-index). CONCLUSIONS In this work, we have proposed a segmentation-free survival analysis system that takes advantage of the recently emerging deep learning framework and well-studied survival analysis methods such as the Cox proportional hazards model. In addition, we provide an approach to visualize the discovered biomarkers, which can serve as concrete evidence supporting the survival model's decision.
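The objective behind the second component, a Cox partial log-likelihood with an elastic net penalty, can be written out directly. A minimal sketch (Breslow form, no tie handling; toy data and parameter values are mine, not from the paper):

```python
import numpy as np

def cox_pll_elastic_net(beta, times, events, X, lam=0.1, alpha=0.5):
    """Cox partial log-likelihood with an elastic net penalty.

    For each observed event, the subject's risk score is compared against
    the log-sum-exp of scores over everyone still at risk at that time;
    the penalty mixes L1 (sparsity) and L2 (shrinkage) terms.
    """
    beta = np.asarray(beta, float)
    scores = X @ beta
    pll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]            # risk set at this event time
        pll += scores[i] - np.log(np.exp(scores[at_risk]).sum())
    penalty = lam * (alpha * np.abs(beta).sum()
                     + (1 - alpha) * 0.5 * (beta ** 2).sum())
    return pll - penalty

# Toy data: one covariate that raises hazard (higher x -> earlier event),
# so a positive coefficient should score better than a negative one.
times = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
events = np.array([1, 1, 1, 1, 1])
X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
good = cox_pll_elastic_net([1.0], times, events, X)
bad = cox_pll_elastic_net([-1.0], times, events, X)
```

Maximizing this objective over `beta` (e.g., with coordinate descent) performs the robust feature selection described above.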
Affiliation(s)
- Lei Cui
- Department of Information Science and Technology, Northwest University, Xi’an, China
- Hansheng Li
- Department of Information Science and Technology, Northwest University, Xi’an, China
- Wenli Hui
- The College of Life Sciences, Northwest University, Xi’an, China
- Sitong Chen
- The College of Life Sciences, Northwest University, Xi’an, China
- Lin Yang
- The College of Life Sciences, Northwest University, Xi’an, China
- Yuxin Kang
- Department of Information Science and Technology, Northwest University, Xi’an, China
- Qirong Bo
- Department of Information Science and Technology, Northwest University, Xi’an, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi’an, China
17
Wuttisarnwattana P, Eid S, Gargesha M, Cooke KR, Wilson DL. Cryo-imaging of Stem Cell Biodistribution in Mouse Model of Graft-Versus-Host-Disease. Ann Biomed Eng 2020; 48:1702-1711. [PMID: 32103369 DOI: 10.1007/s10439-020-02487-z]
Abstract
We demonstrated the use of multispectral cryo-imaging and software to analyze the biodistribution of human mesenchymal stromal cells (hMSCs) in mouse models of graft-versus-host disease (GVHD) following allogeneic bone marrow transplantation (BMT). We injected quantum-dot-labeled hMSCs via the tail vein into mice receiving BMT and analyzed hMSC biodistribution in major organs (e.g., lung, liver, spleen, kidneys, and bone marrow). We compared the biodistribution of hMSCs in allogeneic BMT recipients (with GVHD) to that in syngeneic BMT recipients (without GVHD). The cryo-imaging system revealed cellular biodistribution and redistribution patterns in the animal model. We initially found clusters of cells in the lung that eventually dissociated into single cells and redistributed to other organs within 72 h. The in vivo half-life of the exogenous MSCs was about 21 h. We found that the biodistribution of stromal cells was not related to blood flow; rather, cells preferentially homed to specific organs. In conclusion, cryo-imaging was suitable for analyzing cellular biodistribution, visualizing cells anywhere in the mouse model with single-cell sensitivity. By characterizing the biodistribution and anatomical specificity of a therapeutic cellular product, we believe that cryo-imaging can play an important role in the advancement of stem and stromal cell therapies and regenerative medicine.
Affiliation(s)
- Patiwet Wuttisarnwattana
- Department of Computer Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai, 50200, Thailand; Biomedical Engineering Institute, Chiang Mai University, Chiang Mai, Thailand
- Saada Eid
- Department of Pediatric Hematology and Oncology, Case Western Reserve University, Cleveland, OH, USA
- Kenneth R Cooke
- Department of Oncology, The Sidney Kimmel Comprehensive Cancer Center, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
18
Liu J, Shen C, Liu T, Aguilera N, Tam J. Active Appearance Model Induced Generative Adversarial Network for Controlled Data Augmentation. Med Image Comput Comput Assist Interv 2019; 11764:201-208. [PMID: 31696163 PMCID: PMC6834374 DOI: 10.1007/978-3-030-32239-7_23]
Abstract
Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation. However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that can measure statistical distributions of shape and intensity and use this ACAM model to guide C-GAN to generate more realistic images, which we call A-GAN. A-GAN provides an effective means for conveying anisotropic intensity information to C-GAN. A-GAN incorporates a statistical model (ACAM) to determine how transformations are applied for data augmentation. Traditional approaches for data augmentation that are based on arbitrary transformations might lead to unrealistic shape variations in an augmented dataset that are not representative of real data. A-GAN is designed to ameliorate this. To validate the effectiveness of using A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, which is a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations. The cell detection and segmentation accuracy when assisted by A-GAN augmentation was higher than that achieved with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.
Affiliation(s)
- Jianfei Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Christine Shen
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
19
Lun ATL, Riesenfeld S, Andrews T, Dao TP, Gomes T, Marioni JC. EmptyDrops: distinguishing cells from empty droplets in droplet-based single-cell RNA sequencing data. Genome Biol 2019; 20:63. [PMID: 30902100 PMCID: PMC6431044 DOI: 10.1186/s13059-019-1662-y]
Abstract
Droplet-based single-cell RNA sequencing protocols have dramatically increased the throughput of single-cell transcriptomics studies. A key computational challenge when processing these data is to distinguish libraries for real cells from empty droplets. Here, we describe a new statistical method for calling cells from droplet-based data, based on detecting significant deviations from the expression profile of the ambient solution. Using simulations, we demonstrate that EmptyDrops has greater power than existing approaches while controlling the false discovery rate among detected cells. Our method also retains distinct cell types that would have been discarded by existing methods in several real data sets.
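The core idea, testing whether a barcode's counts deviate significantly from the ambient expression profile, can be sketched as a Monte Carlo multinomial test. This illustrates the principle only; EmptyDrops itself uses a Dirichlet-multinomial model with Good-Turing ambient estimates and combines the test with a knee-point retention rule:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def empty_droplet_pvalues(counts, ambient_total_max=10, n_sim=2000):
    """Monte Carlo test of each barcode against the ambient RNA profile.

    The ambient profile is estimated from barcodes with very low totals
    (assumed empty); a barcode gets a small p-value when its counts are
    less likely under that profile than multinomial noise explains.
    """
    totals = counts.sum(axis=1)
    ambient = counts[totals <= ambient_total_max].sum(axis=0).astype(float)
    p = ambient / ambient.sum()
    logp = np.log(p + 1e-12)

    def loglik(x):
        # Full multinomial log-likelihood, including the coefficient.
        n = x.sum(axis=-1)
        return ((x * logp).sum(axis=-1)
                + gammaln(n + 1) - gammaln(x + 1).sum(axis=-1))

    pvals = []
    for row in counts:
        obs = loglik(row)
        sims = loglik(rng.multinomial(row.sum(), p, size=n_sim))
        pvals.append((np.sum(sims <= obs) + 1) / (n_sim + 1))
    return np.array(pvals)

# 20 ambient-like barcodes plus one concentrated "real cell" barcode.
empties = np.tile([3, 2, 3, 2], (20, 1))
counts = np.vstack([empties, [[100, 0, 0, 0]]])
pvals = empty_droplet_pvalues(counts)
```

Barcodes with small p-values after multiple-testing correction would be called as cells.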
Affiliation(s)
- Aaron T L Lun
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge, UK.
- Samantha Riesenfeld
- Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Tallulah Andrews
- Wellcome Trust Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge, UK
- The Phuong Dao
- Program for Computational and Systems Biology, Sloan Kettering Institute, Memorial Sloan Kettering Cancer Center, New York, USA
- Tomas Gomes
- Wellcome Trust Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge, UK
- John C Marioni
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge, UK
- Wellcome Trust Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge, UK
- EMBL European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge, UK
20
Liimatainen K, Kananen L, Latonen L, Ruusuvuori P. Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks. BMC Bioinformatics 2019; 20:80. [PMID: 30767778 PMCID: PMC6376647 DOI: 10.1186/s12859-019-2605-z]
Abstract
BACKGROUND Cell counting from cell cultures is required in multiple biological and biomedical research applications. In particular, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data are required. We propose a method for cell detection that requires annotated training data for one cell line only and generalizes to other, unseen cell lines. RESULTS Training a deep learning model with only one cell line can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from the training domain, high precision but lower recall is achieved. The generalization capability of the model can be improved with training data transformations, but only to a certain degree. To further improve detection accuracy on unseen domains, we propose an iterative unsupervised domain adaptation method. High-precision predictions on unseen cell lines enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used a U-Net-based model and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with the PC-3 cell line and used the LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. The highest improvement in accuracy was achieved for 22Rv1 cells: the F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for the target domains was 0.87, with a mean improvement of 16 percent. CONCLUSIONS With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.
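The iterative adaptation loop (predict the unseen domain, promote only high-confidence predictions to pseudo-labels, retrain on source plus pseudo-labels) can be sketched with a toy nearest-centroid classifier on 2-D points. Everything here, the classifier, the margin rule, and the data, is illustrative rather than the paper's U-Net pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(centroids, X):
    """Labels plus a confidence margin (gap between the two distances)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

def adapt(X_src, y_src, X_tgt, rounds=3, keep_frac=0.3):
    """Each round: predict the target domain, promote the most confident
    predictions to pseudo-labels, and retrain on source + pseudo-labels."""
    X, y = X_src, y_src
    for _ in range(rounds):
        cents = fit_centroids(X, y)
        pred, margin = predict(cents, X_tgt)
        keep = margin >= np.quantile(margin, 1.0 - keep_frac)
        X = np.vstack([X_src, X_tgt[keep]])
        y = np.concatenate([y_src, pred[keep]])
    return fit_centroids(X, y)

# Source domain: two clusters; target domain: the same clusters, shifted.
X_src = np.vstack([rng.normal((0, 0), 0.5, (100, 2)),
                   rng.normal((4, 0), 0.5, (100, 2))])
y_src = np.repeat([0, 1], 100)
X_tgt = np.vstack([rng.normal((1.5, 1.5), 0.5, (100, 2)),
                   rng.normal((5.5, 1.5), 0.5, (100, 2))])
y_tgt = np.repeat([0, 1], 100)
cents = adapt(X_src, y_src, X_tgt)
acc = (predict(cents, X_tgt)[0] == y_tgt).mean()
```

The same high-precision-first logic is what makes the automatically generated pseudo-labels trustworthy enough to retrain on.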
Affiliation(s)
- Kaisa Liimatainen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Lauri Kananen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Leena Latonen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
21
Ata MM, Ashour AS, Guo Y, Elnaby MMA. Centroid tracking and velocity measurement of white blood cell in video. Health Inf Sci Syst 2018; 6:20. [PMID: 30425827 DOI: 10.1007/s13755-018-0060-2]
Abstract
Automated blood cell tracking has a vital role, as the tracking process reflects blood cell characteristics and can indicate several diseases. Blood cell tracking is challenging due to the non-rigid shapes of the cells, the variability of their videos, and the presence of other moving objects in the blood. To tackle these challenges, we proposed a green star based centroid (GSBC) moving white blood cell (WBC) tracking algorithm to measure WBC velocity and draw its trajectory. The proposed cell tracking system consists of two stages: (i) WBC detection and blob analysis, and (ii) fine-tuning of the tracking process by determining the centroid of the WBC and marking it for further fine tracking, excluding bacteria from the bounding box. Furthermore, the speed and trajectory of the WBC motion are recorded and plotted. In the experiments, an optical flow technique was compared with the proposed tracking system, showing the superiority of the proposed system: the optical flow method failed to track the WBC. The proposed system identified the WBC accurately, while optical flow identified all the other moving objects, leading to its inability to track the WBC.
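The centroid-and-velocity bookkeeping in such a tracker is simple to state precisely. A minimal sketch, assuming the WBC has already been segmented to one binary mask per frame (this is not the GSBC detection stage itself, and the calibration parameters are placeholders):

```python
import numpy as np

def blob_centroid(mask):
    """Centroid (row, col) of a binary mask of the tracked cell."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def track(masks, frame_interval_s=1.0, um_per_pixel=1.0):
    """Per-frame centroids (the trajectory) and mean speed.

    Speed is the average centroid displacement per frame, converted to
    physical units via the two calibration parameters.
    """
    trajectory = [blob_centroid(m) for m in masks]
    steps = [np.hypot(r2 - r1, c2 - c1) * um_per_pixel
             for (r1, c1), (r2, c2) in zip(trajectory, trajectory[1:])]
    speed = sum(steps) / (len(steps) * frame_interval_s)
    return trajectory, speed

# Two frames: a 3x3 cell moves 4 pixels to the right.
f1 = np.zeros((10, 10), bool); f1[1:4, 1:4] = True
f2 = np.zeros((10, 10), bool); f2[1:4, 5:8] = True
trajectory, speed = track([f1, f2])
```

Plotting the returned trajectory reproduces the kind of path-and-speed output the paper records for each tracked WBC.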
Affiliation(s)
- Mohamed Maher Ata
- Misr Higher Institute of Engineering and Technology, Mansoura, Egypt
- Amira S Ashour
- Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta, Egypt
- Yanhui Guo
- Department of Computer Science, University of Illinois at Springfield, Springfield, IL, USA
- Mustafa M Abd Elnaby
- Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta, Egypt
22
Fathi F, Jalili R, Amjadi M, Rashidi MR. SPR signals enhancement by gold nanorods for cell surface marker detection. Bioimpacts 2019; 9:71-78. [PMID: 31334038 PMCID: PMC6637213 DOI: 10.15171/bi.2019.10]
Abstract
Introduction: The detection of micrometer-sized particles such as cells is a challenge for surface plasmon resonance (SPR) biosensors, because the depth of the evanescent wave is less than 500 nm. In this study, we demonstrated for the first time the use of streptavidin-functionalized gold nanorods (GNRs) as signal-enhancement labels for the detection of cell surface markers in SPR-based biosensors.
Methods: The GNRs (λmax: 735 nm) were modified with streptavidin using the EDC/NHS coupling method, and human umbilical vein endothelial cells (HUVECs) were selected as the cell model for detecting VE-cadherin on the cell surface with a real-time SPR device using a 785 nm laser source.
Results: The investigations revealed that the plasmonic field extension produced by the gold layer and the GNRs resulted in a multi-fold enhancement of the SPR signal when the laser wavelength of the SPR instrument was matched to the wavelength of maximum absorbance of the GNRs. Moreover, the ∆RU values for both specific and non-specific binding increased with the number of injected cells.
Conclusion: The results showed that cell detection can be performed in real time, without the time-consuming processing required by conventional methods such as immunocytochemistry, flow cytometry, and western blotting.
Affiliation(s)
- Farzaneh Fathi
- Research Center for Pharmaceutical Nanotechnology, Tabriz University of Medical Sciences, Tabriz, Iran; Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Roghayeh Jalili
- Department of Analytical Chemistry, Faculty of Chemistry, University of Tabriz, Tabriz, Iran
- Mohammad Amjadi
- Department of Analytical Chemistry, Faculty of Chemistry, University of Tabriz, Tabriz, Iran
- Mohammad-Reza Rashidi
- Research Center for Pharmaceutical Nanotechnology, Tabriz University of Medical Sciences, Tabriz, Iran; Faculty of Pharmacy, Tabriz University of Medical Sciences, Tabriz, Iran
23
Xie Y, Xing F, Shi X, Kong X, Su H, Yang L. Efficient and robust cell detection: A structured regression approach. Med Image Anal 2018; 44:245-254. [PMID: 28797548 PMCID: PMC6051760 DOI: 10.1016/j.media.2017.07.003]
Abstract
Efficient and robust cell detection serves as a critical prerequisite for many subsequent biomedical image analysis methods and computer-aided diagnosis (CAD). It remains a challenging task due to touching cells, inhomogeneous background noise, and large variations in cell sizes and shapes. In addition, the ever-increasing amount of available datasets and the high resolution of whole-slide scanned images pose a further demand for efficient processing algorithms. In this paper, we present a novel structured regression model based on a proposed fully residual convolutional neural network for efficient cell detection. For each testing image, our model learns to produce a dense proximity map that exhibits higher responses at locations near cell centers. Our method only requires a few training images with weak annotations (just one dot indicating the cell centroids). We have extensively evaluated our method using four different datasets, covering different microscopy staining methods (e.g., H & E or Ki-67 staining) and image acquisition techniques (e.g., bright-field or phase contrast imaging). Experimental results demonstrate the superiority of our method over existing state-of-the-art methods in terms of both detection accuracy and running time.
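The dense proximity map used as the regression target can be generated directly from the dot annotations. A sketch using an exponential-decay kernel of the kind commonly used for such targets; the parameter names and values here are illustrative rather than taken from the paper:

```python
import numpy as np

def proximity_map(shape, centers, radius=5.0, alpha=3.0):
    """Build a regression target that peaks at annotated cell centers.

    Each pixel within `radius` of a center gets a value that decays
    exponentially with distance (1 at the center, 0 at the radius);
    pixels farther away get 0. Overlaps are resolved by taking the max.
    """
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    target = np.zeros(shape)
    for r, c in centers:
        d = np.sqrt((rr - r) ** 2 + (cc - c) ** 2)
        contrib = np.where(
            d <= radius,
            (np.exp(alpha * (1.0 - d / radius)) - 1) / (np.exp(alpha) - 1),
            0.0)
        target = np.maximum(target, contrib)  # nearest center wins
    return target

t = proximity_map((32, 32), [(10, 10), (20, 25)])
```

A network regressed against such maps produces peaks at cell centers at test time, which non-maximum suppression converts into detections.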
Affiliation(s)
- Yuanpu Xie
- Department of Biomedical Engineering, University of Florida, FL 32611 USA.
- Fuyong Xing
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiaoshuang Shi
- Department of Biomedical Engineering, University of Florida, FL 32611, USA
- Xiangfei Kong
- School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Drive, 637553, Singapore
- Hai Su
- Department of Biomedical Engineering, University of Florida, FL 32611, USA
- Lin Yang
- Department of Biomedical Engineering, University of Florida, FL 32611, USA; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
24
Guan J, Li J, Liang S, Li R, Li X, Shi X, Huang C, Zhang J, Pan J, Jia H, Zhang L, Chen X, Liao X. NeuroSeg: automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data. Brain Struct Funct 2017; 223:519-533. [PMID: 29124351 DOI: 10.1007/s00429-017-1545-5]
Abstract
Two-photon Ca2+ imaging has become a popular approach for monitoring neuronal population activity with cellular or subcellular resolution in vivo. This approach allows for the recording of hundreds to thousands of neurons per animal and thus leads to a large amount of data to be processed. In particular, manually drawing regions of interest is the most time-consuming aspect of data analysis. However, the development of automated image analysis pipelines, which will be essential for dealing with the likely future deluge of imaging data, remains a major challenge. To address this issue, we developed NeuroSeg, an open-source MATLAB program that can facilitate the accurate and efficient segmentation of neurons in two-photon Ca2+ imaging data. We propose an approach that uses a generalized Laplacian of Gaussian filter to detect cells and weighting-based segmentation to separate individual cells from the background. We tested this approach on an in vivo two-photon Ca2+ imaging dataset obtained from mouse cortical neurons with fields of view of different sizes. We show that this approach exhibits superior performance for cell detection and segmentation compared with existing published tools. In addition, we integrated the previously reported activity-based segmentation into our approach and found that this combined method was even more promising. The NeuroSeg software, including source code and a graphical user interface, is freely available and will be a useful tool for in vivo brain activity mapping.
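The Laplacian-of-Gaussian detection step can be sketched with standard SciPy filtering: bright blobs produce strong negative LoG responses, so negating the filtered image and keeping thresholded local maxima yields candidate cell centers. This is a plain circular LoG illustration, not NeuroSeg's generalized (shape-adaptive) filter:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_cells_log(image, sigma=3.0, threshold=0.1):
    """Detect bright blob centers with a Laplacian-of-Gaussian filter.

    Bright blobs give strong negative LoG responses, so we negate the
    filtered image and keep local maxima above `threshold`.
    """
    response = -gaussian_laplace(image.astype(float), sigma=sigma)
    local_max = maximum_filter(response, size=int(4 * sigma)) == response
    return np.argwhere(local_max & (response > threshold))

# Synthetic test: two Gaussian "cells" on a blank field.
img = np.zeros((64, 64))
rr, cc = np.mgrid[0:64, 0:64]
for r, c in [(20, 20), (45, 40)]:
    img += np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * 3.0 ** 2))
centers = detect_cells_log(img, sigma=3.0, threshold=0.01)
```

Each detected center would then seed the weighting-based segmentation that separates the cell body from the background.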
Affiliation(s)
- Jiangheng Guan: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Jingcheng Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Shanshan Liang: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Ruijie Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Xingyi Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Xiaozhe Shi: Brain Research Center, Third Military Medical University, Chongqing, 400038, China; School of Life Sciences, Peking University, Beijing, 100871, China
- Ciyu Huang: College of Computer and Information Science and College of Software, Southwest University, Chongqing, 400715, China
- Jianxiong Zhang: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Junxia Pan: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Hongbo Jia: Brain Research Instrument Innovation Center, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, Jiangsu, China
- Le Zhang: College of Computer and Information Science and College of Software, Southwest University, Chongqing, 400715, China
- Xiaowei Chen: Brain Research Center, Third Military Medical University, Chongqing, 400038, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, 200031, China
- Xiang Liao: Brain Research Center, Third Military Medical University, Chongqing, 400038, China

25
Chen GY, Li Z, Duarte JN, Esteban A, Cheloha RW, Theile CS, Fink GR, Ploegh HL. Rapid capture and labeling of cells on single domain antibodies-functionalized flow cell. Biosens Bioelectron 2016; 89:789-794. [PMID: 27816596] [DOI: 10.1016/j.bios.2016.10.015] [Received: 08/30/2016] [Revised: 10/01/2016] [Accepted: 10/05/2016] [Indexed: 01/13/2023]
Abstract
Current techniques to characterize leukocyte subgroups in blood require long sample preparation times and sizable sample volumes. A simplified method for leukocyte characterization using smaller blood volumes would thus be useful in diagnostic settings. Here we describe a flow system composed of two functionalized graphene oxide (GO) surfaces that allow the capture of distinct leukocyte populations from small volumes of blood using camelid single-domain antibody fragments (VHHs) as capture agents. We used site-specifically labeled leukocytes to detect and identify cells exposed to fungal challenge. Combining the chemical and optical properties of GO with the versatility of the VHH scaffold in the context of a flow system provides a quick and efficient method for the capture and characterization of functional leukocytes.
Affiliation(s)
- Guan-Yu Chen: Whitehead Institute for Biomedical Research, Cambridge, MA, USA; Institute of Biomedical Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan; Department of Biological Science and Technology, National Chiao Tung University, Hsinchu 30010, Taiwan
- Zeyang Li: Whitehead Institute for Biomedical Research, Cambridge, MA, USA
- Joao N Duarte: Whitehead Institute for Biomedical Research, Cambridge, MA, USA
- Ross W Cheloha: Whitehead Institute for Biomedical Research, Cambridge, MA, USA
- Gerald R Fink: Whitehead Institute for Biomedical Research, Cambridge, MA, USA; Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Hidde L Ploegh: Whitehead Institute for Biomedical Research, Cambridge, MA, USA; Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, USA

26
26
|
Li Y, Rose F, di Pietro F, Morin X, Genovesio A. Detection and tracking of overlapping cell nuclei for large scale mitosis analyses. BMC Bioinformatics 2016; 17:183. [PMID: 27112769] [PMCID: PMC4845473] [DOI: 10.1186/s12859-016-1030-9] [Received: 03/10/2016] [Accepted: 04/09/2016] [Indexed: 11/26/2022]
Abstract
Background: Cell culture on printed micropatterned slides combined with automated fluorescence microscopy allows the extraction of tens of thousands of videos of small, isolated, growing cell clusters. The analysis of such large datasets in space and time is of great interest to the community, as testing multiple conditions can identify factors involved in cell growth, cell division or tissue formation. However, cells growing on a micropattern tend to be tightly packed and to overlap with each other. Consequently, image analysis of these large dynamic datasets without human intervention has proven impossible with state-of-the-art automated cell detection methods.
Results: Here, we propose a fully automated image analysis approach to estimate the number, location and shape of each cell nucleus in clusters at high throughput. The method is based on a robust fit of Gaussian mixture models with two and three components on each frame, followed by an analysis over time of the fitting residual and two other relevant features. We use it to identify with high precision the very first frame containing three cells. In our case, this allows a cell division angle to be measured on each video and division-angle distributions to be constructed for each tested condition. We demonstrate the accuracy of our method by validating it against manual annotation on about 4000 videos of cell clusters.
Conclusions: The proposed approach enables the high-throughput analysis of video sequences of isolated cell clusters obtained using micropatterns. It relies on only two parameters, which can be set robustly as they reduce to the average cell size and intensity.
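The two-versus-three-component decision described in this abstract can be sketched as follows. BIC is used here as an assumed stand-in for the authors' robust fit and residual-over-time analysis, and the synthetic point clouds are illustrative only.

```python
# Sketch of the model-selection idea: fit Gaussian mixtures with k = 2 and
# k = 3 to the nucleus point cloud of a frame and keep the better-fitting k.
import numpy as np
from sklearn.mixture import GaussianMixture

def count_nuclei(points):
    """Return 2 or 3, whichever mixture explains the points better."""
    bic = {}
    for k in (2, 3):
        gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(points)
        bic[k] = gmm.bic(points)  # lower BIC = better fit after complexity penalty
    return min(bic, key=bic.get)

# Synthetic "frames": well-separated 2-D point clouds standing in for nuclei.
rng = np.random.default_rng(0)
two = np.vstack([rng.normal(mu, 1.0, size=(200, 2)) for mu in [(0, 0), (10, 0)]])
three = np.vstack([rng.normal(mu, 1.0, size=(200, 2)) for mu in [(0, 0), (10, 0), (5, 9)]])

print(count_nuclei(two), count_nuclei(three))  # 2 3
```

Running the same comparison on every frame of a video and watching when the preferred k flips from 2 to 3 mirrors the paper's search for the first three-cell frame.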
Affiliation(s)
- Yingbo Li: Scientific Center for Computational Biology, Institut de Biologie de l'Ecole Normale Superieure, CNRS-INSERM-ENS, PSL Research University, 46, rue d'Ulm, Paris, 75005, France; Division cellulaire et neurogenèse, Institut de Biologie de l'Ecole Normale Superieure, PSL Research University, 46, rue d'Ulm, Paris, 75005, France
- France Rose: Scientific Center for Computational Biology, Institut de Biologie de l'Ecole Normale Superieure, CNRS-INSERM-ENS, PSL Research University, 46, rue d'Ulm, Paris, 75005, France
- Florencia di Pietro: Division cellulaire et neurogenèse, Institut de Biologie de l'Ecole Normale Superieure, PSL Research University, 46, rue d'Ulm, Paris, 75005, France
- Xavier Morin: Division cellulaire et neurogenèse, Institut de Biologie de l'Ecole Normale Superieure, PSL Research University, 46, rue d'Ulm, Paris, 75005, France
- Auguste Genovesio: Scientific Center for Computational Biology, Institut de Biologie de l'Ecole Normale Superieure, CNRS-INSERM-ENS, PSL Research University, 46, rue d'Ulm, Paris, 75005, France

27
|
Kudella PW, Moll K, Wahlgren M, Wixforth A, Westerhausen C. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples. Malar J 2016; 15:223. [PMID: 27090910] [PMCID: PMC4835829] [DOI: 10.1186/s12936-016-1243-4] [Received: 12/24/2015] [Accepted: 03/30/2016] [Indexed: 11/14/2022]
Abstract
Background: Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. A detailed understanding of this adhesive phenomenon may enable the development of new therapies that interfere with rosette formation. For this, it is crucial to determine parameters such as rosetting rate and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated rosetting analyzer for micrographs (ARAM), a free, stand-alone analysis software with a convenient graphical user interface that determines rosetting rate, rosette size distribution and parasitaemia, is presented.
Methods: ARAM is an executable with two operation modes for the automated identification of objects in images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient filter with a threshold filter. The second mode determines object location and size distribution with a single contrast method. The obtained results are compared with standardized manual analysis. ARAM calculates statistical confidence probabilities for rosetting rate and parasitaemia.
Results: ARAM analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate, and the size and location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of functions. The second, non-malaria-specific analysis mode of ARAM offers the functionality to detect arbitrary objects.
Conclusions: ARAM has the capability to push malaria research to a more quantitative and statistically significant level, with increased reliability due to operator independence. As an installation file for Windows 7, 8.1 and 10 is available for free, ARAM offers a novel, open, easy-to-use platform for the malaria community to elucidate rosetting.
Electronic supplementary material: The online version of this article (doi:10.1186/s12936-016-1243-4) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Kirsten Moll: Department of Microbiology, Tumor and Cell Biology, Karolinska Institutet, Box 280, 171 77, Stockholm, Sweden
- Mats Wahlgren: Department of Microbiology, Tumor and Cell Biology, Karolinska Institutet, Box 280, 171 77, Stockholm, Sweden
- Achim Wixforth: Experimental Physics I, University of Augsburg, Universitätsstraße 1, Augsburg, Germany; Nanosystems Initiative Munich, Schellingstraße 4, Munich, Germany
- Christoph Westerhausen: Experimental Physics I, University of Augsburg, Universitätsstraße 1, Augsburg, Germany; Nanosystems Initiative Munich, Schellingstraße 4, Munich, Germany

28
|
Arteta C, Lempitsky V, Noble JA, Zisserman A. Detecting overlapping instances in microscopy images using extremal region trees. Med Image Anal 2015; 27:3-16. [PMID: 25980675] [DOI: 10.1016/j.media.2015.03.002] [Received: 06/02/2014] [Revised: 11/20/2014] [Accepted: 03/03/2015] [Indexed: 11/29/2022]
Abstract
In many microscopy applications the images may contain regions of both low and high cell density, corresponding to different tissues or to colonies at different stages of growth. This poses a challenge to most previously developed automated cell detection and counting methods, which are designed to handle either the low-density scenario (through cell detection) or the high-density scenario (through density estimation or texture analysis). The objective of this work is to detect all instances of an object of interest in microscopy images; the instances may be partially overlapping and clustered. To this end we introduce a tree-structured discrete graphical model that is used to select and label a set of non-overlapping regions in the image by a global optimization of a classification score. Each region is labeled with the number of instances it contains: for example, regions containing two or three object instances can be selected by defining separate classes for tuples of objects in the detection process. We show that this formulation can be learned within the structured output SVM framework and that inference in such a model can be accomplished using dynamic programming on a tree-structured region graph. Furthermore, the learning requires only weak annotations: a dot on each instance. The candidate regions for the selection are obtained as extremal regions of a surface computed from the microscopy image, and we show that the performance of the model can be improved by considering a proxy problem for learning the surface that allows better selection of the extremal regions. We also consider a number of variations of the loss function used in the structured output learning. The model is applied and evaluated on six quite disparate image datasets covering fluorescence microscopy, weak-fluorescence molecular images, phase contrast microscopy and histopathology images, and is shown to exceed the state of the art in performance.
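The candidate-generation step named in this abstract, enumerating extremal regions as connected components of successive superlevel sets, can be sketched as follows. The thresholds and synthetic image are illustrative assumptions; the region tree built from the nesting of these components and the structured-SVM scoring are omitted.

```python
# Sketch of extremal-region candidate generation: connected components of
# successive thresholdings of the image.
import numpy as np
from scipy import ndimage

def extremal_regions(image, levels):
    """Return [(threshold, label_image, n_components), ...] per level."""
    out = []
    for t in levels:
        labels, n = ndimage.label(image >= t)  # components of the superlevel set
        out.append((t, labels, n))
    return out

# Two bright blobs that merge at a low threshold and separate at a high
# one -- exactly the nesting structure the region tree encodes.
rr, cc = np.mgrid[0:40, 0:40]
img = (np.exp(-((rr - 20) ** 2 + (cc - 12) ** 2) / 50.0)
       + np.exp(-((rr - 20) ** 2 + (cc - 28) ** 2) / 50.0))

for t, _, n in extremal_regions(img, levels=[0.1, 0.7]):
    print(t, n)  # 0.1 -> 1 merged region; 0.7 -> 2 separate regions
```

Labeling such candidates with the number of instances they contain (0, 1, 2, ...) is what turns this enumeration into the counting model of the paper.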
Affiliation(s)
- Carlos Arteta: Department of Engineering Science, University of Oxford, Oxford OX1 2JD, UK
- Victor Lempitsky: Skolkovo Institute of Science and Technology (Skoltech), Skolkovo 143025, Russia
- J Alison Noble: Department of Engineering Science, University of Oxford, Oxford OX1 2JD, UK
- Andrew Zisserman: Department of Engineering Science, University of Oxford, Oxford OX1 2JD, UK

29
|
Cai S, Li G, Zhang X, Xia Y, Chen M, Wu D, Chen Q, Zhang J, Chen J. A signal-on fluorescent aptasensor based on single-stranded DNA-sensitized luminescence of terbium (III) for label-free detection of breast cancer cells. Talanta 2015; 138:225-230. [PMID: 25863395] [DOI: 10.1016/j.talanta.2015.02.056] [Received: 12/04/2014] [Revised: 02/25/2015] [Accepted: 02/28/2015] [Indexed: 12/26/2022]
Abstract
Breast cancer is the most common type of malignant tumor in women. Recently, it has been shown that the detection of breast cancer tumor cells outside the primary tumor is an effective early-diagnosis approach with great prognostic and clinical utility. For this purpose, we developed a signal-on fluorescence aptasensor for label-free, facile and sensitive detection of MCF-7 breast cancer cells. Owing to target-aptamer specific recognition and single-stranded DNA-sensitized luminescence of terbium (III), the proposed aptasensor exhibits excellent sensitivity, with a detection limit as low as 70 cells mL(-1). Compared with common organic dyes and emerging nanotechnological probes, the combination of terbium (III) and a single-stranded DNA signal probe (Tb(3+)-SP) serves as a more powerful bio-probe because of its stable optical properties, good biocompatibility and freedom from complex synthesis. Feasibility investigations have illustrated the potential applicability of this aptasensor for the selective and sensitive detection of MCF-7 breast cancer cells. Moreover, the proposed aptasensor can also be extended to the determination of other tumor cells or biomolecules by altering the corresponding aptamers. Taken together, this easy-to-perform aptasensor may represent a promising approach for the early screening and detection of tumors or other biomolecules in clinical diagnosis.
Affiliation(s)
- Shuxian Cai: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Guangwen Li: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Xi Zhang: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Yaokun Xia: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Mei Chen: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Dongzhi Wu: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Qiuxiang Chen: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China
- Jing Zhang: The Ministry of Education Key Laboratory of Biopesticide and Chemical Biology, College of Life Sciences, Fujian Agriculture and Forestry University, Fuzhou, Fujian Province 350002, PR China
- Jinghua Chen: Department of Pharmaceutical Analysis, The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian Province 350108, PR China

30
|
Wandermur G, Rodrigues D, Allil R, Queiroz V, Peixoto R, Werneck M, Miguel M. Plastic optical fiber-based biosensor platform for rapid cell detection. Biosens Bioelectron 2013; 54:661-6. [PMID: 24334281] [DOI: 10.1016/j.bios.2013.11.030] [Received: 08/14/2013] [Revised: 11/07/2013] [Accepted: 11/08/2013] [Indexed: 11/19/2022]
Abstract
This work presents a novel, fast-response-time, plastic optical fiber (POF) biosensor to detect Escherichia coli. It discloses the technique for the development, calibration and measurement of this robust and simple-to-construct POF biosensor. The U-shaped probes were manufactured with a specially developed device. The calibration process evaluated sensitivity, accuracy and repeatability by using sucrose solutions to obtain refractive indices (RI) in the range 1.33-1.39, the RI equivalents of water and bacteria, respectively. The POF probes were functionalized with anti-E. coli serotype O55 antibody and tested first with saline and then with bacterial concentrations of 10(4), 10(6), and 10(8) colony-forming units/ml (CFU/ml). The optoelectronic setup consists of an 880 nm LED connected to the U-shaped probe and driven by a sine waveform generated in Simulink (MATLAB(®)). On the other side of the probe, a photodetector generates a photocurrent, which is amplified by a transconductance amplifier. The output voltage signal is read by the analog-to-digital (A/D) input of a microcontroller. At all tested concentrations, the results showed a tendency for the output signal to decrease with time, due to the attachment of bacteria to the POF probe and the consequent increase in RI close to the sensitive area of the fiber surface. It has been shown that the system is capable of providing a positive response to the bacterial concentration in less than 10 min, demonstrating good potential for commercial development as a portable field sensor.
Affiliation(s)
- Gisele Wandermur: Photonic and Instrumentation Laboratory, Electrical Engineering Program, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil
- Domingos Rodrigues: Photonic and Instrumentation Laboratory, Electrical Engineering Program, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil
- Regina Allil: Division of Chemical, Biological and Nuclear Defence, Biological Defence Laboratory, Brazilian Army Technological Center, Rio de Janeiro, RJ, Brazil
- Vanessa Queiroz: Photonic and Instrumentation Laboratory, Electrical Engineering Program, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil
- Raquel Peixoto: Laboratory of Molecular Microbial Ecology, Department of General Microbiology, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil
- Marcelo Werneck: Photonic and Instrumentation Laboratory, Electrical Engineering Program, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil
- Marco Miguel: Food Microbiology Laboratory, Department of Medical Microbiology, Universidade Federal do Rio de Janeiro, Cidade Universitária, Rio de Janeiro, Brazil