1. Zhang C, Zhu J. AML leukocyte classification method for small samples based on ACGAN. Biomed Eng Biomed Tech 2024; 69:491-499. PMID: 38547466. DOI: 10.1515/bmt-2024-0028.
Abstract
Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination of images of the patient's peripheral blood smear or by chemical testing. In smear microscopy, the ability to quickly identify, count, and differentiate the different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods require large amounts of valid training data. This study aims to assess the applicability of the auxiliary classifier generative adversarial network (ACGAN) to the classification of white blood cells from small samples. The method is trained on the TCIA dataset, and its classification accuracy is compared with two classical classifiers and with current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. The accuracy of the ACGAN on the validation set is 97.1%, and its precision, recall, and F1 score on the validation set are 97.5%, 97.3%, and 97.4%, respectively. In addition, the ACGAN scored higher than other advanced methods, indicating that it is competitive in classification accuracy.
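For orientation, the sketch below shows the two-headed discriminator that defines an ACGAN: one head scores real versus generated images, the other predicts the auxiliary class label. It is a minimal PyTorch illustration; the layer widths, input channels, and five-class head are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ACGANDiscriminator(nn.Module):
        # two heads: adversarial (real vs. fake) and auxiliary class label
        def __init__(self, n_classes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.adv_head = nn.Linear(64, 1)          # real vs. generated
            self.cls_head = nn.Linear(64, n_classes)  # leukocyte class

        def forward(self, x):
            h = self.features(x)
            return self.adv_head(h), self.cls_head(h)

    # The discriminator loss couples both objectives; the generator is
    # trained against the same two heads (omitted here).
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    def d_loss(adv, cls, real_target, class_target):
        return bce(adv.squeeze(1), real_target) + ce(cls, class_target)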
Affiliations
- Chenxuan Zhang: School of Artificial Intelligence, Chongqing University of Technology, Chongqing, P.R. China
- Junlin Zhu: College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu, P.R. China

2. Wang J, Yang F, Wang B, Liu M, Wang X, Wang R, Song G, Wang Z. High-quality AFM image acquisition of living cells by modified residual encoder-decoder network. J Struct Biol 2024; 216:108107. PMID: 38906499. DOI: 10.1016/j.jsb.2024.108107.
Abstract
The atomic force microscope enables ultra-precision imaging of living cells. However, atomic force microscope imaging is a complex and time-consuming process. The obtained images of living cells usually have low resolution and are easily influenced by noise, leading to unsatisfactory imaging quality and obstructing research and analysis based on cell images. Herein, an adaptive attention image reconstruction network based on a residual encoder-decoder was proposed; by combining deep learning technology with atomic force microscope imaging, it supports high-quality cell image acquisition. Compared with other learning-based methods, the proposed network showed a higher peak signal-to-noise ratio, higher structural similarity, and better image reconstruction performance. In addition, the cell images reconstructed by each method were used for cell recognition, and those reconstructed by the proposed network had the highest cell recognition rate. The proposed network has brought insights into atomic force microscope-based imaging of living cells and cell image reconstruction, which is of great significance for biological and medical research.
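As a reference point for the reported metrics, here is a minimal sketch of how peak signal-to-noise ratio and structural similarity are typically computed with scikit-image; the [0, 1] intensity scaling and the synthetic stand-in images are assumptions for illustration.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def reconstruction_quality(reference, reconstructed):
        # both arrays are float images scaled to [0, 1]
        psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
        ssim = structural_similarity(reference, reconstructed, data_range=1.0)
        return psnr, ssim

    ref = np.random.rand(128, 128)                              # stand-in AFM frame
    out = np.clip(ref + 0.05 * np.random.randn(128, 128), 0, 1) # noisy reconstruction
    print(reconstruction_quality(ref, out))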
Affiliations
- Junxi Wang, Fan Yang, Bowei Wang, Xia Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China; College of Physics, Changchun University of Science and Technology, Changchun 130022, China
- Mengnan Liu, Rui Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- Guicai Song: College of Physics, Changchun University of Science and Technology, Changchun 130022, China
- Zuobin Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China; College of Physics, Changchun University of Science and Technology, Changchun 130022, China; JR3CN & IRAC, University of Bedfordshire, Luton LU1 3JU, UK

3. Cioffi G, Dannhauser D, Rossi D, Netti PA, Causa F. Unknown cell class distinction via neural network based scattering snapshot recognition. Biomed Opt Express 2023; 14:5060-5074. PMID: 37854558. PMCID: PMC10581789. DOI: 10.1364/boe.492028.
Abstract
Neural network-based image classification is widely used in life science applications. However, it is essential to derive a correct classification method for unknown images, where no prior knowledge can be utilised. Under a closed-set assumption, unknown images will inevitably be misclassified, but this can be overcome by choosing an open-set classification approach, which first generates an in-distribution of identified images and then discriminates out-of-distribution images. The testing of such image classification for single-cell applications in life science scenarios has yet to be done, but could broaden our expertise in quantifying the influence of prediction uncertainty in deep learning. In this framework, we implemented the open-set concept on scattering snapshots of living cells to distinguish between unknown and known cell classes, targeting four different known monoblast cell classes and a single tumoral unknown monoblast cell line. We also investigated the influence of experimental sample errors and optimised neural network hyperparameters to obtain a high unknown cell class detection accuracy. We discovered that our open-set approach exhibits robustness against sample noise, a crucial aspect for its application in life science. Moreover, the presented open-set based neural network reveals measurement uncertainty from the cell prediction, which can be applied to a wide range of single-cell classifications.
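One common way to realize the open-set idea is to threshold the classifier's confidence over the known classes and route low-confidence samples to "unknown". The sketch below illustrates that rule only; the threshold value and the confidence-based criterion are assumptions, not necessarily the paper's exact mechanism.

    import numpy as np

    def open_set_predict(logits, threshold=0.9):
        # softmax over the known (in-distribution) classes
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        conf, labels = p.max(axis=1), p.argmax(axis=1)
        # low-confidence snapshots are flagged as the unknown class (-1)
        return np.where(conf >= threshold, labels, -1)

    logits = np.random.randn(10, 4)   # 4 known monoblast classes (placeholder)
    print(open_set_predict(logits))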
Affiliations
- Gaia Cioffi, David Dannhauser, Filippo Causa: Interdisciplinary Research Centre on Biomaterials (CRIB) and Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università degli Studi di Napoli “Federico II”, Piazzale Tecchio 80, 80125 Naples, Italy
- Domenico Rossi: Center for Advanced Biomaterials for Healthcare@CRIB, Istituto Italiano di Tecnologia, Largo Barsanti e Matteucci 53, 80125 Naples, Italy
- Paolo A. Netti: both of the above affiliations

4. Wang J, Gao M, Yang L, Huang Y, Wang J, Wang B, Song G, Wang Z. Cell recognition based on atomic force microscopy and modified residual neural network. J Struct Biol 2023; 215:107991. PMID: 37451561. DOI: 10.1016/j.jsb.2023.107991.
Abstract
Cell recognition methods are in high demand in cell biology and medicine, and methods based on atomic force microscopy (AFM) show great application value. Differences in the mechanical properties or morphology of cells have frequently been used to detect whether cells are cancerous, but this detection approach cannot serve as a general means of cancer cell detection, and traditional manual feature extraction also has its limitations. In this work, we proposed an analytic method for recognizing cell types based on the physical properties of cells and deep learning. The residual neural network used for recognition was modified by multi-scale convolutional fusion, an attention mechanism, and depthwise separable convolution, so as to optimize feature extraction and reduce computational cost. In the method, the collected cells were imaged by AFM, and the processed images were analyzed by the optimized convolutional neural network. The recognition results for two groups of cells (HL-7702 and SMMC-7721, SGC-7901 and GES-1) show that recognition rates are higher for datasets combining cell surface morphology, adhesion, and Young's modulus, and for datasets with optimal resolution. Our study indicates that recognizing the physical properties of cells with deep learning technology can serve as a universal and effective method for the automated analysis of cell information.
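Of the three modifications, depthwise separable convolution is the one most directly tied to the reduced computational cost: a per-channel convolution followed by a 1x1 pointwise convolution needs far fewer multiply-adds than a standard convolution. A minimal PyTorch sketch of such a block follows; the channel counts and kernel size are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        # depthwise (per-channel) conv followed by a 1x1 pointwise conv
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    y = DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 56, 56))
    print(y.shape)   # torch.Size([1, 128, 56, 56])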
Affiliations
- Junxi Wang, Mingyan Gao, Bowei Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China
- Lixin Yang, Yuxi Huang, Jiahe Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- Guicai Song: College of Physics, Changchun University of Science and Technology, Changchun 130022, China
- Zuobin Wang: International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China; JR3CN & IRAC, University of Bedfordshire, Luton LU1 3JU, UK

5. Yoshida D, Akita K, Higaki T. Machine learning and feature analysis of the cortical microtubule organization of Arabidopsis cotyledon pavement cells. Protoplasma 2023; 260:987-998. PMID: 36219259. DOI: 10.1007/s00709-022-01813-7.
Abstract
The measurement of cytoskeletal features can provide valuable insights into cell biology. In recent years, digital image analysis of cytoskeletal features has become an important research tool for the quantitative evaluation of cytoskeleton organization. In this study, we examined the utility of a supervised machine learning approach with digital image analysis to distinguish different cellular organizational patterns. We focused on the jigsaw puzzle-shaped pavement cells of Arabidopsis thaliana. Measurements of three features of cortical microtubules in these cells (parallelness, density, and the coefficient of variation of the intensity distribution of fluorescently labeled cytoskeletons [an indicator of microtubule bundling]) were obtained from microscopic images. A random forest machine learning model was then trained on these measurements to differentiate mutant from wild-type cells, and Taxol-treated from control cells. Using these three metrics, we were able to distinguish wild type from bpp125 triple mutant cells with approximately 80% accuracy; classification accuracy was 88% for control and Taxol-treated cells. Different features contributed most to each classification, namely, the coefficient of variation for the wild-type/mutant cells and parallelness for the Taxol-treated/control cells. The random forest method enabled quantitative evaluation of the contribution of each feature to the classification, and partial dependence plots showed the relationships between metric values and classification accuracy. While further improvements to the method are needed, our small-scale analysis shows the potential of this approach for large-scale screening analyses.
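A minimal sketch of this workflow with scikit-learn follows; the random arrays stand in for the three measured metrics and the binary genotype labels, and the tree count is an assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import partial_dependence

    # columns: parallelness, density, coefficient of variation (bundling proxy)
    X = np.random.rand(200, 3)           # placeholder metric values
    y = np.random.randint(0, 2, 200)     # 0 = wild type, 1 = mutant (placeholder)
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    print(clf.feature_importances_)      # which metric drives the classification
    pd = partial_dependence(clf, X, features=[2])   # response to the CV metric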
Affiliations
- Daichi Yoshida: Graduate School of Science and Technology, Kumamoto University, Kurokami, Chuo-ku, Kumamoto 860-8555, Japan
- Kae Akita: Department of Chemical and Biological Sciences, Faculty of Science, Japan Women's University, Meijirodai, Bunkyo-ku, Tokyo 112-8681, Japan
- Takumi Higaki: Graduate School of Science and Technology; International Research Organization in Advanced Science and Technology; and International Research Center for Agricultural and Environmental Biology, Kumamoto University, Kurokami, Chuo-ku, Kumamoto 860-8555, Japan

6. Shifat-E-Rabbi M, Zhuang Y, Li S, Rubaiyat AHM, Yin X, Rohde GK. Invariance encoding in sliced-Wasserstein space for image classification with limited training data. Pattern Recognit 2023; 137:109268. PMID: 36713887. PMCID: PMC9879373. DOI: 10.1016/j.patcog.2022.109268.
Abstract
Deep convolutional neural networks (CNNs) are broadly considered to be state-of-the-art generic end-to-end image classification systems. However, they are known to underperform when training data are limited and thus require data augmentation strategies that render the method computationally expensive and not always effective. Rather than using a data augmentation strategy to encode invariances as typically done in machine learning, here we propose to mathematically augment a nearest-subspace classification model in sliced-Wasserstein space by exploiting certain mathematical properties of the Radon Cumulative Distribution Transform (R-CDT), a recently introduced image transform. We demonstrate that for a particular type of learning problem, our mathematical solution has advantages over data augmentation with deep CNNs in terms of classification accuracy and computational complexity, and is particularly effective under limited training data. The method is simple, effective, computationally efficient, non-iterative, and requires no parameters to be tuned. Python code implementing our method is available at https://github.com/rohdelab/mathematical_augmentation. Our method is integrated into the software package PyTransKit, which is available at https://github.com/rohdelab/PyTransKit.
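The nearest-subspace rule at the heart of the method can be sketched generically: fit one linear subspace per class from (transformed) training samples and classify a test sample by its reconstruction error. The sketch below assumes the R-CDT features are already computed (the transform itself ships with PyTransKit); the dimensions and subspace rank are illustrative.

    import numpy as np

    def fit_subspaces(train_by_class, k=8):
        # one orthonormal basis per class from the SVD of its training samples
        return {c: np.linalg.svd(X.T, full_matrices=False)[0][:, :k]
                for c, X in train_by_class.items()}

    def nearest_subspace(x, bases):
        # assign x to the class whose subspace reconstructs it best
        err = {c: np.linalg.norm(x - B @ (B.T @ x)) for c, B in bases.items()}
        return min(err, key=err.get)

    train = {c: np.random.rand(50, 256) for c in ("A", "B")}   # rows = samples
    bases = fit_subspaces(train)
    print(nearest_subspace(np.random.rand(256), bases))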
Affiliations
- Mohammad Shifat-E-Rabbi, Shiying Li: Imaging and Data Science Laboratory and Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA
- Yan Zhuang, Abu Hasnat Mohammad Rubaiyat, Xuwang Yin: Imaging and Data Science Laboratory and Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22908, USA
- Gustavo K. Rohde: Imaging and Data Science Laboratory, Department of Biomedical Engineering, and Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22908, USA

7. Mamaeva A, Krasnova O, Khvorova I, Kozlov K, Gursky V, Samsonova M, Tikhonova O, Neganova I. Quality Control of Human Pluripotent Stem Cell Colonies by Computational Image Analysis Using Convolutional Neural Networks. Int J Mol Sci 2022; 24:140. PMID: 36613583. PMCID: PMC9820636. DOI: 10.3390/ijms24010140.
Abstract
Human pluripotent stem cells are promising for a wide range of research and therapeutic purposes. Their maintenance in culture requires deep control of their pluripotent and clonal status. A non-invasive method for such control involves day-to-day observation of morphological changes, along with imaging of colonies and subsequent automatic assessment of colony phenotype using image analysis by machine learning methods. We developed a classifier using a convolutional neural network and applied it to discriminate between images of human embryonic stem cell (hESC) colonies with "good" and "bad" morphological phenotypes, associated with high and low potential for pluripotency and clonality maintenance, respectively. The training dataset included phase-contrast images of the hESC line H9, in which the morphological phenotype of each colony was assessed through visual analysis. The classifier showed a high level of accuracy (89%) in phenotype prediction. By training the classifier on cropped images of various sizes, we showed that a spatial scale of ~144 μm was the most informative in terms of classification quality, an intermediate size between the characteristic diameters of a single cell (~15 μm) and an entire colony (~540 μm). We additionally performed a proteomic analysis of several H9 cell samples used in the computational analysis and showed that cells of different phenotypes differed at the molecular level. Our results indicate that the proposed approach can be used as an effective method of non-invasive automated analysis to identify undesirable developmental anomalies during the propagation of pluripotent stem cells.
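The crop-size experiment amounts to scanning classification quality over centered crops of different physical sizes. A sketch of the cropping step follows; the pixel pitch used to convert microns to pixels and the image dimensions are assumptions for illustration only.

    import numpy as np

    def center_crop(img, size_px):
        # square center crop of a 2-D phase-contrast frame
        h, w = img.shape
        top, left = (h - size_px) // 2, (w - size_px) // 2
        return img[top:top + size_px, left:left + size_px]

    PIXEL_UM = 0.9   # assumed pixel pitch (um/px), not from the paper
    img = np.random.rand(1024, 1024)                   # placeholder colony image
    crops = {um: center_crop(img, int(um / PIXEL_UM)) for um in (72, 144, 288)}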
Affiliations
- Anastasiya Mamaeva, Konstantin Kozlov, Maria Samsonova: Mathematical Biology and Bioinformatics Lab, Peter the Great St. Petersburg Polytechnic University, 195251 Saint Petersburg, Russia
- Olga Krasnova, Irina Neganova: Institute of Cytology, 194064 Saint Petersburg, Russia
- Irina Khvorova: Faculty of Biology, Saint-Petersburg State University, 199034 Saint Petersburg, Russia
- Olga Tikhonova: Institute of Biomedical Chemistry, 119121 Moscow, Russia

8. Fekri-Ershad S, Al-Imari MJ, Hamad MH, Alsaffar MF, Hassan FG, Hadi ME, Mahdi KS. Cell Phenotype Classification Based on Joint of Texture Information and Multilayer Feature Extraction in DenseNet. Comput Intell Neurosci 2022; 2022:6895833. PMID: 36479023. PMCID: PMC9722294. DOI: 10.1155/2022/6895833.
Abstract
Cell phenotype classification is a critical task in many medical applications, such as protein localization, gene effect identification, and the diagnosis of some cancer types. Fluorescence imaging is the most efficient tool for analyzing the biological characteristics of cells, so cell phenotype classification in fluorescence microscopy images has received increased attention from scientists in the last decade. The visible structures of cells usually differ in shape, texture, intensity relationships, and so on. In this scope, most existing approaches use one type, or a combination, of low-level and high-level features. In this paper, a new approach is proposed based on a combination of low-level and high-level features. An improved version of local quinary patterns is used to extract low-level texture features, and an innovative multilayer deep feature extraction method is used to extract high-level features from DenseNet. In this respect, the output feature map of each dense block is passed separately through pooling and flatten layers, and the resulting feature vectors are concatenated. The performance of the proposed approach is evaluated on the benchmark 2D-HeLa dataset and compared with state-of-the-art methods in terms of classification accuracy; the comparison demonstrates the higher performance of the proposed approach relative to several efficient methods.
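The multilayer extraction step can be sketched with torchvision's DenseNet-121: pool and flatten the output map of each dense block, then concatenate the results. The pooling choice, input size, and untrained weights are assumptions for illustration; the paper's exact layers may differ.

    import torch
    import torch.nn.functional as F
    from torchvision.models import densenet121

    net = densenet121(weights=None).features.eval()   # load real weights in practice
    x, feats = torch.randn(1, 3, 224, 224), []
    with torch.no_grad():
        h = x
        for name, layer in net.named_children():
            h = layer(h)
            if name.startswith("denseblock"):
                # pool and flatten each dense block's output map separately
                feats.append(F.adaptive_avg_pool2d(h, 1).flatten(1))
    deep_vector = torch.cat(feats, dim=1)   # concatenated high-level features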
Affiliations
- Shervan Fekri-Ershad: Faculty of Computer Engineering and Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Mustafa Jawad Al-Imari, Mohammed Hayder Hamad, Marwa Fadhil Alsaffar, Fuad Ghazi Hassan, Mazin Eidan Hadi, Karrar Salih Mahdi: Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq

9. Walton RT, Singh A, Blainey PC. Pooled genetic screens with image-based profiling. Mol Syst Biol 2022; 18:e10768. PMID: 36366905. PMCID: PMC9650298. DOI: 10.15252/msb.202110768.
Abstract
Spatial structure in biology, spanning molecular, organellular, cellular, tissue, and organismal scales, is encoded through a combination of genetic and epigenetic factors in individual cells. Microscopy remains the most direct approach to exploring the intricate spatial complexity defining biological systems and the structured dynamic responses of these systems to perturbations. Genetic screens with deep single-cell profiling via image features or gene expression programs have the capacity to show how biological systems work in detail by cataloging many cellular phenotypes with one experimental assay. Microscopy-based cellular profiling provides information complementary to next-generation sequencing (NGS) profiling and has only recently become compatible with large-scale genetic screens. Optical screening now offers the scale needed for systematic characterization and is poised for further scale-up. We discuss how these methodologies, together with emerging technologies for genetic perturbation and microscopy-based multiplexed molecular phenotyping, are powering new approaches to reveal genotype-phenotype relationships.
Affiliations
- Russell T Walton: Broad Institute of MIT and Harvard, Cambridge, MA, USA; Department of Biological Engineering, MIT, Cambridge, MA, USA
- Avtar Singh: Broad Institute of MIT and Harvard, Cambridge, MA, USA; present address: Department of Cellular and Tissue Genomics, Genentech, South San Francisco, CA, USA
- Paul C Blainey: Broad Institute of MIT and Harvard, Cambridge, MA, USA; Department of Biological Engineering, MIT, Cambridge, MA, USA; Koch Institute for Integrative Cancer Research, MIT, Cambridge, MA, USA

10. Demagny J, Roussel C, Le Guyader M, Guiheneuf E, Harrivel V, Boyer T, Diouf M, Dussiot M, Demont Y, Garçon L. Combining imaging flow cytometry and machine learning for high-throughput schistocyte quantification: A SVM classifier development and external validation cohort. EBioMedicine 2022; 83:104209. PMID: 35986949. PMCID: PMC9404284. DOI: 10.1016/j.ebiom.2022.104209.
Abstract
Background: Schistocyte counts are a cornerstone of the diagnosis of thrombotic microangiopathy syndrome (TMA). Their manual quantification is complex, and alternative automated methods suffer from pitfalls that limit their use. We report a method combining imaging flow cytometry (IFC) and artificial intelligence for the direct, label-free, and operator-independent quantification of schistocytes in whole blood.
Methods: We used 135,045 IFC images from blood acquired from 14 patients to extract 188 features with the IDEAS software and 128 features from a convolutional neural network (CNN) built with the Keras framework, in order to train a support vector machine (SVM) blood-element classifier used for schistocyte quantification.
Findings: Keras features showed better accuracy (94.03%, CI: 93.75-94.31%) than IDEAS features (91.54%, CI: 91.21-91.87%) in recognising whole-blood elements, and together they showed the best accuracy (95.64%, CI: 95.39-95.88%). We obtained an excellent correlation (0.93, CI: 0.90-0.96) between three haematologists and our method on a cohort of 102 patient samples. All patients with schistocytosis (>1% schistocytes) were detected with excellent specificity (91.3%, CI: 82.0-96.7%) and sensitivity (100%, CI: 89.4-100.0%). We confirmed these results with similar specificity (91.1%, CI: 78.8-97.5%) and sensitivity (100%, CI: 88.1-100.0%) on a validation cohort (n=74) analysed in an independent healthcare centre. Simultaneous analysis of 16 samples in both study centres showed a very good correlation between the two imaging flow cytometers (Y=1.001x).
Interpretation: We demonstrate that IFC can be a reliable tool for operator-independent schistocyte quantification with no pre-analytical processing, which is of utmost importance in emergency situations such as TMA.
Funding: None.
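Schematically, the classifier concatenates the two feature sets and trains an SVM on the joint vector. A minimal scikit-learn sketch follows, with random placeholders standing in for the 188 IDEAS and 128 CNN features per image.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X_ideas = np.random.rand(500, 188)   # placeholder morphology features
    X_cnn = np.random.rand(500, 128)     # placeholder CNN features
    y = np.random.randint(0, 2, 500)     # 1 = schistocyte (placeholder labels)
    X = np.hstack([X_ideas, X_cnn])      # joint feature vector
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)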

11. An automated cell line authentication method for AstraZeneca global cell bank using deep neural networks on brightfield images. Sci Rep 2022; 12:7894. PMID: 35550583. PMCID: PMC9098893. DOI: 10.1038/s41598-022-12099-3.
Abstract
Cell line authentication is important in the biomedical field to ensure that researchers are not working with misidentified cells. Short tandem repeat profiling is the gold-standard method, but it has limitations, including being expensive and time-consuming. Deep neural networks have achieved great success in the analysis of cellular images in a cost-effective way. However, because of the lack of centralized, available datasets, whether cell line authentication can be replaced or supported by cell image classification is still an open question. Moreover, the relationship between incubation times and cellular images has not been explored in previous studies. In this study, we automated the process of cell line authentication by deep learning analysis of brightfield cell line images. We proposed a novel multi-task framework to identify cell lines from cell images and simultaneously predict how long the cell lines have been incubated. Using data from thirty cell lines in the AstraZeneca Cell Bank, we demonstrated that our proposed method can accurately identify cell lines from brightfield images with 99.8% accuracy and predicts incubation durations with a coefficient of determination of 0.927. Considering that new cell lines are continually added to the AstraZeneca Cell Bank, we integrated transfer learning with the proposed system to deal with data from new cell lines not included in the pre-trained model. Our method achieved excellent performance, with a precision of 97.7% and recall of 95.8%, in the detection of 14 new cell lines. These results demonstrate that our proposed framework can effectively identify cell lines using brightfield images.
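The multi-task design amounts to a shared feature extractor with two heads, a cell-line classifier and an incubation-time regressor, trained under a joint loss. A minimal PyTorch sketch follows; the feature width and the loss weighting are assumptions, not the paper's values.

    import torch
    import torch.nn as nn

    class MultiTaskHead(nn.Module):
        # shared image features feed a cell-line classifier and an
        # incubation-duration regressor trained jointly
        def __init__(self, d=512, n_lines=30):
            super().__init__()
            self.cls = nn.Linear(d, n_lines)
            self.reg = nn.Linear(d, 1)

        def forward(self, h):
            return self.cls(h), self.reg(h)

    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    def joint_loss(logits, t_hat, y, t, alpha=0.5):   # alpha is an assumption
        return ce(logits, y) + alpha * mse(t_hat.squeeze(1), t)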

12. Luo CY, Pearson P, Xu G, Rich SM. A Computer Vision-Based Approach for Tick Identification Using Deep Learning Models. Insects 2022; 13:116. PMID: 35206690. PMCID: PMC8879515. DOI: 10.3390/insects13020116.
Abstract
A wide range of pathogens, such as bacteria, viruses, and parasites, can be transmitted by ticks and can cause diseases such as Lyme disease, anaplasmosis, or Rocky Mountain spotted fever. Landscape and climate changes are driving the geographic range expansion of important tick species. The morphological identification of ticks is critical for the assessment of disease risk; however, this process is time-consuming, costly, and requires qualified taxonomic specialists. To address this issue, we constructed a tick identification tool that can differentiate the most commonly encountered human-biting ticks, Amblyomma americanum, Dermacentor variabilis, and Ixodes scapularis, by implementing artificial intelligence methods with deep learning algorithms. Many convolutional neural network (CNN) models (such as VGG, ResNet, or Inception) have been used for image recognition purposes, but their application to tick identification remains very limited. Here, we describe modified CNN-based models trained on a large-scale, molecularly verified dataset to identify tick species. The best CNN model achieved 99.5% accuracy on the test set. These results demonstrate that a computer vision system is a potential alternative tool to help prescreen ticks for identification and enable earlier diagnosis of disease risk, and, as such, could be a valuable resource for health professionals.
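A typical way to build such a classifier is to fine-tune a stock CNN backbone with a new three-way head. The sketch below uses ResNet-50 purely as an example; the paper compares several modified CNN architectures, so this is not its exact model.

    import torch.nn as nn
    from torchvision.models import resnet50

    m = resnet50(weights=None)     # in practice, start from pretrained weights
    for p in m.parameters():
        p.requires_grad = False    # freeze the backbone, train only the new head
    # new 3-way head: A. americanum, D. variabilis, I. scapularis
    m.fc = nn.Linear(m.fc.in_features, 3)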
Affiliations
- Stephen M. Rich (shared with C.-Y.L., P.P., and G.X.): Department of Microbiology, University of Massachusetts, Amherst, MA 01003, USA

13. The Active Segmentation Platform for Microscopic Image Classification and Segmentation. Brain Sci 2021; 11:1645. PMID: 34942947. PMCID: PMC8699732. DOI: 10.3390/brainsci11121645.
Abstract
Image segmentation still represents an active area of research, since no universal solution can be identified. Traditional image segmentation algorithms are problem-specific and limited in scope. On the other hand, machine learning offers an alternative paradigm in which predefined features are combined into different classifiers, providing pixel-level classification and segmentation. However, machine learning alone cannot address the question of which features are appropriate for a certain classification problem. This article presents an automated image segmentation and classification platform, called Active Segmentation, which is based on ImageJ. The platform integrates expert domain knowledge, providing partial ground truth, with geometrical feature extraction based on multi-scale signal processing combined with machine learning. The segmentation approach is exemplified on the ISBI 2012 image segmentation challenge dataset. As a second application, we demonstrate whole-image classification functionality based on the same principles, exemplified using the HeLa and HEp-2 datasets. The results indicate that feature-space enrichment, properly balanced with feature selection, can achieve performance comparable to deep learning approaches. In summary, differential geometry can substantially improve the outcome of machine learning, since it can enrich the underlying feature space with new geometrical invariant objects.
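The platform's combination of multi-scale geometric features with a trainable classifier can be illustrated in a few lines: compute per-pixel filter responses at several scales and fit a classifier on partially labeled pixels. The filters, scales, and labels below are placeholder assumptions; the platform itself is ImageJ-based, and Python is used here only for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_laplace
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img, scales=(1, 2, 4)):
        # per-pixel multi-scale features: smoothed intensity and the
        # Laplacian (a curvature-like differential-geometry quantity)
        maps = []
        for s in scales:
            maps.append(gaussian_filter(img, s))
            maps.append(gaussian_laplace(img, s))
        return np.stack(maps, axis=-1).reshape(-1, 2 * len(scales))

    img = np.random.rand(64, 64)                 # placeholder image
    labels = np.random.randint(0, 2, 64 * 64)    # partial ground truth (placeholder)
    clf = RandomForestClassifier(n_estimators=100).fit(pixel_features(img), labels)
    mask = clf.predict(pixel_features(img)).reshape(64, 64)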

14. Shifat-E-Rabbi M, Yin X, Rubaiyat AHM, Li S, Kolouri S, Aldroubi A, Nichols JM, Rohde GK. Radon Cumulative Distribution Transform Subspace Modeling for Image Classification. J Math Imaging Vis 2021; 63:1185-1203. PMID: 35464640. PMCID: PMC9032314. DOI: 10.1007/s10851-021-01052-0.
Abstract
We present a new supervised image classification method applicable to a broad class of image deformation models. The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data, whose mathematical properties are exploited to express the image data in a form that is more suitable for machine learning. While certain operations such as translation, scaling, and higher-order transformations are challenging to model in native image space, we show that the R-CDT can capture some of these variations and thus render the associated image classification problems easier to solve. The method, which utilizes a nearest-subspace algorithm in R-CDT space, is simple to implement, non-iterative, has no hyperparameters to tune, is computationally efficient, is label efficient, and provides competitive accuracies to state-of-the-art neural networks for many types of classification problems. In addition to test accuracy, we show improvements with respect to neural network-based methods in terms of computational efficiency (it can be implemented without the use of GPUs), the number of training samples needed, and out-of-distribution generalization. The Python code for reproducing our results is available at [1].
Affiliations
- Shiying Li: Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA
- Soheil Kolouri: Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Akram Aldroubi: Department of Mathematics, Vanderbilt University, Nashville, TN 37212, USA
- Gustavo K. Rohde: Department of Biomedical Engineering and Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22908, USA

15. Luo S, Nguyen KT, Nguyen BTT, Feng S, Shi Y, Elsayed A, Zhang Y, Zhou X, Wen B, Chierchia G, Talbot H, Bourouina T, Jiang X, Liu AQ. Deep learning-enabled imaging flow cytometry for high-speed Cryptosporidium and Giardia detection. Cytometry A 2021; 99:1123-1133. PMID: 33550703. DOI: 10.1002/cyto.a.24321.
Abstract
Imaging flow cytometry has become a popular technology for bioparticle image analysis because of its capability of capturing thousands of images per second. Nevertheless, the vast number of images generated by imaging flow cytometry imposes great challenges for data analysis, especially when the species have similar morphologies. In this work, we report a deep learning-enabled high-throughput system for detecting Cryptosporidium and Giardia in drinking water. This system combines imaging flow cytometry with an efficient artificial neural network called MCellNet, which achieves a classification accuracy >99.6%. The system can detect Cryptosporidium and Giardia with a sensitivity of 97.37% and a specificity of 99.95%. The high-speed analysis reaches 346 frames per second, outperforming the state-of-the-art deep learning algorithm MobileNetV2 in speed (251 frames per second) with comparable classification accuracy. The reported system enables rapid, accurate, and high-throughput bioparticle detection in clinical diagnostics, environmental monitoring, and other potential biosensing applications.
Affiliations
- Shaobo Luo: ESIEE, Universite Paris-Est, Noisy-le-Grand Cedex, France; Nanyang Environment and Water Research Institute, Nanyang Technological University, Singapore
- Kim Truc Nguyen, Shilun Feng, Ai Qun Liu: Nanyang Environment and Water Research Institute and School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Binh T T Nguyen, Yuzhi Shi, Bihan Wen, Xudong Jiang: School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Ahmed Elsayed: ESIEE, Universite Paris-Est, Noisy-le-Grand Cedex, France
- Yi Zhang: School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore
- Xiaohong Zhou: Research Centre of Environmental and Health Sensing Technology, School of Environment, Tsinghua University, Beijing, China
- Hugues Talbot: CentraleSupelec, Universite Paris-Saclay, Saint-Aubin, France

16. Woloshuk A, Khochare S, Almulhim AF, McNutt AT, Dean D, Barwinska D, Ferkowicz MJ, Eadon MT, Kelly KJ, Dunn KW, Hasan MA, El-Achkar TM, Winfree S. In Situ Classification of Cell Types in Human Kidney Tissue Using 3D Nuclear Staining. Cytometry A 2020; 99:707-721. PMID: 33252180. DOI: 10.1002/cyto.a.24274.
Abstract
To understand the physiology and pathology of disease, capturing the heterogeneity of cell types within their tissue environment is fundamental. In such an endeavor, the human kidney presents a formidable challenge because its complex organizational structure is tightly linked to key physiological functions. Advances in imaging-based cell classification may be limited by the need to incorporate specific markers that can link classification to function. Multiplex imaging can mitigate these limitations but requires cumulative incorporation of markers, which may lead to tissue exhaustion. Furthermore, the application of such strategies in large-scale 3-dimensional (3D) imaging is challenging. Here, we propose that 3D nuclear signatures from a DNA stain, DAPI, which can be incorporated in most experimental imaging, can be used to classify cells in intact human kidney tissue. We developed an unsupervised approach that uses 3D tissue cytometry to generate a large training dataset of nuclei images (NephNuc), where each nucleus is associated with a cell type label. We then devised various supervised machine learning approaches for kidney cell classification and demonstrated that a deep learning approach outperforms classical machine learning or shape-based classifiers. Specifically, a custom 3D convolutional neural network (NephNet3D) trained on nuclei image volumes achieved a balanced accuracy of 80.26%. Importantly, integrating NephNet3D classification with tissue cytometry allowed in situ visualization of cell type classifications in kidney tissue. In conclusion, we present a tissue cytometry and deep learning approach for in situ classification of cell types in human kidney tissue using only a DNA stain. This methodology is generalizable to other tissues and has potential advantages for tissue economy and non-exhaustive classification of different cell types.
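For concreteness, a minimal 3D convolutional stack of the kind used for single-nucleus volumes is sketched below; the channel widths, input volume size, and eight-class head are illustrative assumptions, not the NephNet3D architecture.

    import torch
    import torch.nn as nn

    # minimal 3-D CNN for DAPI nuclei volumes (sizes are assumptions)
    net = nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(32, 8))   # 8 = illustrative number of kidney cell types
    logits = net(torch.randn(2, 1, 32, 32, 32))   # batch of nuclei volumes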
Affiliations
- Andre Woloshuk, Suraj Khochare, Andrew T McNutt, Dawson Dean, Daria Barwinska, Michael J Ferkowicz, Michael T Eadon, Kenneth W Dunn: Department of Medicine, Division of Nephrology and Hypertension, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Aljohara F Almulhim, Mohammad A Hasan: Department of Computer Science, Indiana University Purdue University, Indianapolis, Indiana, USA
- Katherine J Kelly: Department of Medicine, Division of Nephrology and Hypertension, Indiana University School of Medicine, Indianapolis, Indiana, USA; Department of Medicine, Indianapolis VA Medical Center, Indianapolis, Indiana, USA
- Tarek M El-Achkar, Seth Winfree: Department of Medicine, Division of Nephrology and Hypertension, Indiana University School of Medicine, Indianapolis, Indiana, USA; Department of Medicine, Indianapolis VA Medical Center, Indianapolis, Indiana, USA; Department of Anatomy, Cell Biology and Physiology, Indiana University School of Medicine, Indianapolis, Indiana, USA

17. Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. PMID: 32994890. PMCID: PMC7494605. DOI: 10.1016/j.csbj.2020.08.003.
Abstract
Deep learning with artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. In biology and medicine too, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, starting from science at large, moving to biomedical imaging, and then to bioimage analysis in particular.
Affiliations
- Erik Meijering: School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia