1
Ding X, Song Z, Xu J, Hou Y, Yang T, Shan Z. Scalable parameterized quantum circuits classifier. Sci Rep 2024; 14:15886. PMID: 38987660; PMCID: PMC11237021; DOI: 10.1038/s41598-024-66394-2.
Abstract
As a generalized quantum machine learning model, parameterized quantum circuits (PQC) have been found to perform poorly in terms of classification accuracy and model scalability on multi-category classification tasks. To address this issue, we propose a scalable parameterized quantum circuits classifier (SPQCC), which performs a PQC per channel and combines the measurement results, via the classifier's trainable parameters, into the output. By minimizing the cross-entropy loss through optimizing the trainable parameters of the PQC, SPQCC converges quickly. Executing identical PQCs in parallel on different quantum machines of the same structure and scale reduces the complexity of classifier design. Classification simulations on the MNIST dataset show that the accuracy of our proposed classifier far exceeds that of other quantum classification algorithms, achieving a state-of-the-art simulation result and matching or surpassing classical classifiers with a considerable number of trainable parameters. Our classifier demonstrates excellent scalability and classification performance.
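The core idea of entry 1 (one identical PQC per input channel, with the measured expectations combined by trainable weights) can be loosely sketched with a hand-rolled two-qubit statevector simulation. The circuit layout, gate choices, and softmax read-out below are illustrative assumptions, not the authors' SPQCC:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control (basis order 00, 01, 10, 11).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def pqc_expectation(x, params):
    """One tiny PQC: RY angle encoding -> CNOT entangler ->
    trainable RY layer -> expectation of Z on qubit 0."""
    state = np.zeros(4)
    state[0] = 1.0                                    # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state       # encode the data
    state = CNOT @ state                              # entangle
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))     # observable Z on qubit 0
    return state @ z0 @ state

def classify(channels, params, weights):
    """Run the *same* PQC on every channel, then combine the measured
    expectations through trainable weights and a softmax."""
    feats = np.array([pqc_expectation(ch, params) for ch in channels])
    logits = weights @ feats                          # weights: (n_classes, n_channels)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

With all angles zero the circuit leaves |00> untouched, so the Z expectation is exactly 1; training would adjust `params` and `weights` to minimize cross-entropy, as the abstract describes.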
Affiliation(s)
- Xiaodong Ding, Zhihui Song, Jinchen Xu, Yifan Hou, Tian Yang, Zheng Shan: Laboratory for Advanced Computing and Intelligence Engineering, Zhengzhou, 450001, China
2
Yan F, Mutembei B, Valerio T, Gunay G, Ha JH, Zhang Q, Wang C, Selvaraj Mercyshalinie ER, Alhajeri ZA, Zhang F, Dockery LE, Li X, Liu R, Dhanasekaran DN, Acar H, Chen WR, Tang Q. Optical coherence tomography for multicellular tumor spheroid category recognition and drug screening classification via multi-spatial-superficial-parameter and machine learning. Biomed Opt Express 2024; 15:2014-2047. PMID: 38633082; PMCID: PMC11019711; DOI: 10.1364/boe.514079.
Abstract
Optical coherence tomography (OCT) is an ideal imaging technique for noninvasive and longitudinal monitoring of multicellular tumor spheroids (MCTS). However, the internal structural features of MCTS visible in OCT images are still not fully utilized. In this study, we developed cross-statistical, cross-screening, and composite-hyperparameter feature processing methods in conjunction with 12 machine learning models to assess changes within the MCTS internal structure. Our results indicated that the effective features combined with supervised learning models successfully classify OVCAR-8 MCTS cultured from 5,000 and 50,000 cells, pancreatic tumor cell (Panc02-H7) MCTS cultured with fibroblast ratios of 0%, 33%, 50%, and 67%, and OVCAR-4 MCTS treated with 2-methoxyestradiol, AZD1208, and R-ketorolac at concentrations of 1, 10, and 25 µM. This approach holds promise for multi-dimensional physiological and functional evaluation in anticancer studies using OCT and MCTS.
Affiliation(s)
- Feng Yan, Bornface Mutembei, Trisha Valerio, Gokhan Gunay, Qinghao Zhang, Chen Wang, Zaid A. Alhajeri: Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ji-Hee Ha, Danny N. Dhanasekaran: Department of Cell Biology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Fan Zhang: Department of Radiology, School of Medicine, University of Washington, Seattle, WA 98195, USA
- Lauren E. Dockery: Stephenson Cancer Center, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Xinwei Li: Department of Electrical and Electronic Engineering, University of Nottingham, Nottingham NG7 2RD, United Kingdom
- Ronghao Liu: School of Computer Science and Technology, Shandong Jianzhu University, Jinan 250100, China
- Handan Acar, Wei R. Chen, Qinggong Tang: Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA; Institute for Biomedical Engineering, Science, and Technology (IBEST), University of Oklahoma, Norman, OK 73019, USA
3
Treepong P, Theera-Ampornpunt N. Early bread mold detection through microscopic images using convolutional neural network. Curr Res Food Sci 2023; 7:100574. PMID: 37664007; PMCID: PMC10474362; DOI: 10.1016/j.crfs.2023.100574.
Abstract
Mold on bread in the early stages of growth is difficult to discern with the naked eye. Visual inspection and expiration dates are imprecise approaches that consumers rely on to detect bread spoilage. Existing methods for detecting microbial contamination, such as inspection through a microscope and hyperspectral imaging, are unsuitable for consumer use. This paper proposes a novel early bread mold detection method through microscopic images taken using clip-on lenses. These low-cost lenses are used together with a smartphone to capture images of bread at 50× magnification. The microscopic images are automatically classified using state-of-the-art convolutional neural networks (CNNs) with transfer learning. We extensively compared image preprocessing methods, CNN models, and data augmentation methods to determine the best configuration in terms of classification accuracy. The top models achieved near-perfect F1 scores of 0.9948 for white sandwich bread and 0.9972 for whole wheat bread.
Affiliation(s)
- Panisa Treepong: College of Computing, Prince of Songkla University, Phuket, Thailand
4
Dudaie M, Barnea I, Nissim N, Shaked NT. On-chip label-free cell classification based directly on off-axis holograms and spatial-frequency-invariant deep learning. Sci Rep 2023; 13:12370. PMID: 37524884; PMCID: PMC10390541; DOI: 10.1038/s41598-023-38160-3.
Abstract
We present a rapid label-free imaging flow cytometry and cell classification approach based directly on raw digital holograms. Off-axis holography enables real-time acquisition of cells during rapid flow. However, classification of the cells typically requires reconstruction of their quantitative phase profiles, which is time-consuming. Here, we present a new approach for label-free classification of individual cells based directly on the raw off-axis holographic images, each of which contains the complete complex wavefront (amplitude and quantitative phase profiles) of the cell. To obtain this, we built a convolutional neural network, which is invariant to the spatial frequencies and directions of the interference fringes of the off-axis holograms. We demonstrate the effectiveness of this approach using four types of cancer cells. This approach has the potential to significantly improve both speed and robustness of imaging flow cytometry, enabling real-time label-free classification of individual cells.
Affiliation(s)
- Matan Dudaie, Itay Barnea, Noga Nissim, Natan T Shaked: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
5
Harrison PJ, Gupta A, Rietdijk J, Wieslander H, Carreras-Puigvert J, Georgiev P, Wählby C, Spjuth O, Sintorn IM. Evaluating the utility of brightfield image data for mechanism of action prediction. PLoS Comput Biol 2023; 19:e1011323. PMID: 37490493; PMCID: PMC10403126; DOI: 10.1371/journal.pcbi.1011323.
Abstract
Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and largely correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments for which using fluorescence images is problematic. Explorations based on explainable AI techniques also provided valuable insights regarding compounds that were better predicted by one modality over the other.
Affiliation(s)
- Philip John Harrison, Jonne Rietdijk, Jordi Carreras-Puigvert, Polina Georgiev, Ola Spjuth: Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden; Science for Life Laboratory, Uppsala, Sweden
- Ankit Gupta, Håkan Wieslander, Ida-Maria Sintorn: Department of Information Technology, Uppsala University, Uppsala, Sweden
- Carolina Wählby: Science for Life Laboratory, Uppsala, Sweden; Department of Information Technology, Uppsala University, Uppsala, Sweden
6
Al-Dulaimi K, Banks J, Al-Sabaawi A, Nguyen K, Chandran V, Tomeo-Reyes I. Classification of HEp-2 Staining Pattern Images Using Adapted Multilayer Perceptron Neural Network-Based Intra-Class Variation of Cell Shape. Sensors (Basel) 2023; 23:2195. PMID: 36850793; PMCID: PMC9959868; DOI: 10.3390/s23042195.
Abstract
There is growing interest in the clinical research community in methods that automate the classification of HEp-2 stained cells from histopathological images. Challenges faced by such methods include variations in cell densities and cell patterns, overfitting of features, large-scale data volumes and staining variation. In this paper, a multi-class multilayer perceptron technique is adapted by adding a hidden layer that calculates the variation in the mean, scale, kurtosis and skewness of higher-order spectra features of the cell shape information. The adapted network is trained jointly, and class probabilities are computed with a softmax activation function. The method is proposed to address the overfitting, staining and data-volume problems, and classifies HEp-2 stained cells into six classes. An extensive experimental analysis verifies the proposed method, which was trained and tested on Task 1 of the datasets from the ICPR 2014 and ICPR 2016 competitions. The proposed model achieved a higher accuracy of 90.3% with data augmentation than the 87.5% obtained without it. In addition, the framework is compared with existing methods as well as with the entries to the ICPR 2014 and ICPR 2016 competitions; the results demonstrate that the proposed method outperforms recent methods.
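The four statistics the adapted hidden layer aggregates (mean, scale, skewness, kurtosis) and the softmax read-out can be sketched in plain Python. Note the paper computes these over higher-order-spectra features of cell shape; this toy version takes an arbitrary 1-D feature sample, so it illustrates only the statistics themselves:

```python
import math

def moment_features(values):
    """Mean, scale (std), skewness and kurtosis of a sample."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in values) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * var ** 2)
    return [mean, std, skew, kurt]

def softmax(scores):
    """Softmax activation producing class probabilities."""
    m = max(scores)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

For a symmetric sample the skewness is zero, which is a quick sanity check on the formulas.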
Affiliation(s)
- Khamael Al-Dulaimi, Jasmine Banks, Kien Nguyen, Vinod Chandran: School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Aiman Al-Sabaawi: School of Computer Science, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
- Inmaculada Tomeo-Reyes: School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
7
Rodriguez-Conde I, Campos C, Fdez-Riverola F. Horizontally Distributed Inference of Deep Neural Networks for AI-Enabled IoT. Sensors (Basel) 2023; 23:1911. PMID: 36850508; PMCID: PMC9958567; DOI: 10.3390/s23041911.
Abstract
Motivated by the pervasiveness of artificial intelligence (AI) and the Internet of Things (IoT) in the current "smart everything" scenario, this article provides a comprehensive overview of the most recent research at the intersection of both domains, focusing on mechanisms for collaborative inference across edge devices that enable the in situ execution of highly complex state-of-the-art deep neural networks (DNNs) despite the resource-constrained nature of such infrastructures. The review discusses the most salient approaches conceived along those lines, elaborating on the partitioning schemes and parallelism paradigms explored, and provides an organized, schematic discussion of the underlying workflows, the associated communication patterns, and the architectural aspects of the DNNs that have driven the design of such techniques. It also highlights the primary challenges encountered at the design and operational levels and the specific adjustments or enhancements explored in response to them.
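The horizontal partitioning such surveys analyse can be illustrated on a single fully connected layer: each "device" holds a slice of the output neurons, computes its share independently, and a single merge step concatenates the results. This is a generic sketch of output-channel partitioning under that assumption, not a scheme from any specific surveyed paper:

```python
import numpy as np

def fc_layer(x, w):
    """Reference fully connected layer computed on one machine."""
    return w @ x

def distributed_fc(x, w, n_devices):
    """Horizontal (output-channel) partitioning: each 'device' holds a
    slice of the rows of W, computes its partial output locally, and
    the slices are concatenated in a single merge step."""
    shards = np.array_split(w, n_devices, axis=0)  # per-device weight shards
    partials = [shard @ x for shard in shards]     # independent local work
    return np.concatenate(partials)                # the only communication
```

The partitioned result equals the monolithic one exactly, which is why this style of splitting requires no retraining, only coordination of the merge.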
Affiliation(s)
- Ivan Rodriguez-Conde: Department of Computer Science, University of Arkansas at Little Rock, 2801 South University Avenue, Little Rock, AR 72204, USA
- Celso Campos: Department of Computer Science, ESEI (Escuela Superior de Ingeniería Informática), Universidade de Vigo, 32004 Ourense, Spain
- Florentino Fdez-Riverola: CINBIO, Department of Computer Science, ESEI (Escuela Superior de Ingeniería Informática), Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
8
Fang W, Xiong T, Pak OS, Zhu L. Data-Driven Intelligent Manipulation of Particles in Microfluidics. Adv Sci (Weinh) 2023; 10:e2205382. PMID: 36538743; PMCID: PMC9929134; DOI: 10.1002/advs.202205382.
Abstract
Automated manipulation of small particles using external (e.g., magnetic, electric and acoustic) fields has been an emerging technique widely used in different areas. The manipulation typically necessitates a reduced-order physical model characterizing the field-driven motion of particles in a complex environment. Such models are available only for highly idealized settings but are absent for a general scenario of particle manipulation typically involving complex nonlinear processes, which has limited its application. In this work, the authors present a data-driven architecture for controlling particles in microfluidics based on hydrodynamic manipulation. The architecture replaces the difficult-to-derive model by a generally trainable artificial neural network to describe the kinematics of particles, and subsequently identifies the optimal operations to manipulate particles. The authors successfully demonstrate a diverse set of particle manipulations in a numerically emulated microfluidic chamber, including targeted assembly of particles and subsequent navigation of the assembled cluster, simultaneous path planning for multiple particles, and steering one particle through obstacles. The approach achieves both spatial and temporal controllability of high precision for these settings. This achievement revolutionizes automated particle manipulation, showing the potential of data-driven approaches and machine learning in improving microfluidic technologies for enhanced flexibility and intelligence.
Affiliation(s)
- Wen-Zhen Fang: Department of Mechanical Engineering, National University of Singapore, Singapore 117575, Singapore; Key Laboratory of Thermo-Fluid Science and Engineering, MOE, Xi'an Jiaotong University, Xi'an 710049, China
- Tongzhao Xiong, Lailai Zhu: Department of Mechanical Engineering, National University of Singapore, Singapore 117575, Singapore
- On Shun Pak: Department of Mechanical Engineering, Santa Clara University, Santa Clara, CA 95053, USA
9
Deep learning based semantic segmentation and quantification for MRD biochip images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103783.
10
Zhao Y, Qin J, Wang S, Xu Z. Unraveling the morphological complexity of two-dimensional macromolecules. Patterns 2022; 3:100497. PMID: 35755877; PMCID: PMC9214330; DOI: 10.1016/j.patter.2022.100497.
Abstract
2D macromolecules, such as graphene and graphene oxide, possess a rich spectrum of conformational phases. However, their morphological classification has so far relied on visual inspection, which cannot resolve the physics of deformation and surface contact. We employ machine learning methods to address this problem by exploring samples generated by molecular simulations. Features such as metric changes, curvature, conformational anisotropy and surface contact are extracted. Unsupervised learning classifies the morphologies into quasi-flat, folded and crumpled phases and their interphases using geometrical and topological labels or the principal features of the 2D energy map. The results are fed into subsequent supervised learning for phase characterization. The performance of the data-driven models is improved notably by integrating the physics of geometrical deformation and topological contact. The classification and feature extraction characterize the microstructures of the condensed phases and the molecular processes of adsorption and transport, illuminating the processing-microstructure-performance relation in applications.
Highlights:
- Morphologies of 2D macromolecules are classified into four phases
- Data-driven models capture physics and topology beyond the geometry
- Condensed-phase properties are understood through the extracted features
Resolving morphological complexity of macromolecules is the stepping stone to the design and fabrication of high-performance, multi-functional materials and to understanding the soft matter behaviors in biology and engineering. To extract the physics of lattice distortion and surface contact beyond the conformation is critical, yet challenging. Here, we show that, by labeling the simulation data using the 2D map of potential energies, the 3D geometry, and the topology of contact, morphological classification can be achieved with high accuracy. The well-trained model can be used to decipher the microstructural complexity using simulation or experimental data, which may include the geometrical representation only. This data-driven approach extracts the key geometrical and topological features of 2D macromolecules that are directly responsible for the material performance in relevant applications and can be extended to study other complex surfaces such as red blood cells and the brain.
Affiliation(s)
- Yingjie Zhao, Jianshu Qin, Shijun Wang, Zhiping Xu (corresponding author): Applied Mechanics Laboratory, Department of Engineering Mechanics and Center for Nano and Micro Mechanics, Tsinghua University, Beijing 100084, China
11
Automatic Cancer Cell Taxonomy Using an Ensemble of Deep Neural Networks. Cancers (Basel) 2022; 14:2224. PMID: 35565352; PMCID: PMC9100154; DOI: 10.3390/cancers14092224.
Abstract
Microscopic image-based analysis has been intensively used in pathological studies and the diagnosis of diseases. However, mis-authentication of cell lines due to misjudgments by pathologists has been recognized as a serious problem. To address this problem, we propose a deep-learning-based approach for the automatic taxonomy of cancer cell types. A total of 889 bright-field microscopic images of four cancer cell lines were acquired using a benchtop microscope. Individual cells were segmented, and the dataset was augmented to increase its size. Deep transfer learning was then adopted to accelerate the classification of cancer types. Experiments revealed that the deep-learning-based methods outperformed traditional machine-learning-based methods. Moreover, the Wilcoxon signed-rank test showed that deep ensemble approaches outperformed individual deep-learning models (p < 0.001), achieving a classification accuracy of up to 97.735%. Further investigation with the Wilcoxon signed-rank test considered various network design choices, such as the type of optimizer, the type of learning rate scheduler, the degree of fine-tuning, and the use of data augmentation. Finally, using data augmentation and updating all the weights of a network during fine-tuning were found to improve the overall performance of individual convolutional neural network models.
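One common way to combine an ensemble of classifiers is soft voting over per-model class probabilities; the abstract does not state this paper's exact fusion rule, so the sketch below is an assumption about one plausible choice:

```python
def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average each class's probability across
    models, then pick the highest-scoring class.

    prob_lists: one probability vector per model, all the same length.
    Returns (predicted class index, averaged probabilities)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg
```

Averaging calibrated probabilities tends to smooth out individual models' errors, which is consistent with the ensemble gains the abstract reports.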
12
Wang X, Kittaka M, He Y, Zhang Y, Ueki Y, Kihara D. OC_Finder: Osteoclast Segmentation, Counting, and Classification Using Watershed and Deep Learning. Front Bioinform 2022; 2:819570. PMID: 35474753; PMCID: PMC9038109; DOI: 10.3389/fbinf.2022.819570.
Abstract
Osteoclasts are multinucleated cells that exclusively resorb bone matrix proteins and minerals on the bone surface. They differentiate from monocyte/macrophage lineage cells in the presence of osteoclastogenic cytokines such as the receptor activator of nuclear factor-κB ligand (RANKL) and stain positive for tartrate-resistant acid phosphatase (TRAP). In vitro osteoclast formation assays are commonly used to assess the capacity of osteoclast precursor cells to differentiate into osteoclasts, wherein the number of TRAP-positive multinucleated cells is counted. Osteoclasts are identified on cell culture dishes by eye, which is a labor-intensive process; moreover, the manual procedure is not objective and results in a lack of reproducibility. To accelerate the process and reduce the workload of counting osteoclasts, we developed OC_Finder, a fully automated system for identifying osteoclasts in microscopic images. OC_Finder consists of cell image segmentation with a watershed algorithm and cell classification using deep learning. OC_Finder detected osteoclasts differentiated from wild-type and Sh3bp2KI/+ precursor cells with 99.4% segmentation accuracy and 98.1% classification accuracy. The number of osteoclasts classified by OC_Finder was at the same accuracy level as manual counting by a human expert. OC_Finder also showed consistent performance on additional datasets collected with different microscopes, settings, and operators. Together, the successful development of OC_Finder suggests that deep learning is a useful tool for prompt, accurate, and unbiased classification and detection of specific cell types in microscopic images.
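OC_Finder's segment-then-classify pipeline can be caricatured in a few lines of standard-library Python: connected-component labelling stands in for the watershed step and an area threshold stands in for the CNN classifier. Both stand-ins are deliberate simplifications for illustration, not the published method:

```python
from collections import deque

def label_components(mask):
    """Toy stand-in for the segmentation step: 4-connected component
    labelling of a binary image (OC_Finder uses a watershed here)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1                      # new component found
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

def classify_cells(mask, min_area=4):
    """Toy stand-in for the classification step: call a segment positive
    if its area exceeds a threshold (OC_Finder uses a CNN per crop)."""
    labels, n = label_components(mask)
    areas = [0] * (n + 1)
    for row in labels:
        for v in row:
            areas[v] += 1
    return [areas[k] >= min_area for k in range(1, n + 1)]
```

The point of the sketch is the two-stage structure: segmentation produces candidate cells, and a per-candidate classifier makes the osteoclast/non-osteoclast call.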
Affiliation(s)
- Xiao Wang: Department of Computer Science, Purdue University, West Lafayette, IN, United States
- Mizuho Kittaka, Yasuyoshi Ueki: Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, IN, United States; Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, United States
- Yilin He: School of Software Engineering, Shandong University, Jinan, China
- Yiwei Zhang: Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Daisuke Kihara (corresponding author): Department of Computer Science, Purdue University; Department of Biological Sciences, Purdue University; Purdue Cancer Research Institute, Purdue University, West Lafayette, IN, United States
13
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.108572.
14
Kociołek M, Kozłowski M, Cardone A. A Convolutional Neural Networks-Based Approach for Texture Directionality Detection. Sensors (Basel) 2022; 22:562. PMID: 35062522; PMCID: PMC8778371; DOI: 10.3390/s22020562.
Abstract
The perceived texture directionality is an important, not fully explored image characteristic. In many applications texture directionality detection is of fundamental importance. Several approaches have been proposed, such as the fast Fourier-based method. We recently proposed a method based on the interpolated grey-level co-occurrence matrix (iGLCM), robust to image blur and noise but slower than the Fourier-based method. Here we test the applicability of convolutional neural networks (CNNs) to texture directionality detection. To obtain the large amount of training data required, we built a training dataset consisting of synthetic textures with known directionality and varying perturbation levels. Subsequently, we defined and tested shallow and deep CNN architectures. We present the test results focusing on the CNN architectures and their robustness with respect to image perturbations. We identify the best performing CNN architecture, and compare it with the iGLCM, the Fourier and the local gradient orientation methods. We find that the accuracy of CNN is lower, yet comparable to the iGLCM, and it outperforms the other two methods. As expected, the CNN method shows the highest computing speed. Finally, we demonstrate the best performing CNN on real-life images. Visual analysis suggests that the learned patterns generalize to real-life image data. Hence, CNNs represent a promising approach for texture directionality detection, warranting further investigation.
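One of the baselines this paper compares against, the local gradient orientation method, can be sketched with a structure-tensor estimate of the dominant gradient direction. The paper's exact formulation is not given here, so the details below are assumed:

```python
import numpy as np

def dominant_gradient_orientation(img):
    """Estimate the dominant gradient direction (degrees, 0 = x-axis)
    from the image-averaged structure tensor; the texture itself runs
    perpendicular to the returned direction."""
    gy, gx = np.gradient(img.astype(float))        # finite-difference gradients
    jxx = (gx * gx).mean()                         # averaged structure tensor
    jyy = (gy * gy).mean()
    jxy = (gx * gy).mean()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) # principal orientation
    return np.degrees(theta)
```

On synthetic stripes the estimate recovers the expected direction: intensity varying along x gives a dominant gradient near 0 degrees, and the transposed image gives 90 degrees.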
Collapse
Affiliation(s)
- Marcin Kociołek
- Institute of Electronics, Lodz University of Technology, Al. Politechniki 10, 93-590 Łódź, Poland
- Correspondence: ; Tel.: +48-603-291-300
| | - Michał Kozłowski
- Department of Mechatronics, Faculty of Technical Science, University of Warmia and Mazury, Ul. Oczapowskiego 11, 10-710 Olsztyn, Poland;
| | - Antonio Cardone
- Information Technology Laboratory, Software and Systems Division, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, USA;
| |
Collapse
|
15
|
Classification of cervical cells leveraging simultaneous super-resolution and ordinal regression. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108208] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
16
|
Meng N, Cheung JP, Wong KYK, Dokos S, Li S, Choy RW, To S, Li RJ, Zhang T. An artificial intelligence powered platform for auto-analyses of spine alignment irrespective of image quality with prospective validation. EClinicalMedicine 2022; 43:101252. [PMID: 35028544 PMCID: PMC8741432 DOI: 10.1016/j.eclinm.2021.101252] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 12/07/2021] [Accepted: 12/09/2021] [Indexed: 11/08/2022] Open
Abstract
BACKGROUND Assessment of spine alignment is crucial in the management of scoliosis, but current auto-analysis of spine alignment suffers from low accuracy. We aim to develop and validate a hybrid model named SpineHRNet+, which integrates artificial intelligence (AI) and rule-based methods to improve auto-alignment reliability and interpretability. METHODS From December 2019 to November 2020, 1,542 consecutive patients with scoliosis attending two local scoliosis clinics (The Duchess of Kent Children's Hospital at Sandy Bay in Hong Kong; Queen Mary Hospital in Pok Fu Lam on Hong Kong Island) were recruited. The biplanar radiographs of each patient were collected with our medical machine EOS™. The collected radiographs were recaptured using smartphones or screenshots, with deidentified images securely stored. Manually labelled landmarks and alignment parameters by a spine surgeon were considered as ground truth (GT). The data were split 8:2 to train and internally test SpineHRNet+, respectively. This was followed by a prospective validation on another 337 patients. Quantitative analyses of landmark predictions were conducted, and reliabilities of auto-alignment were assessed using linear regression and Bland-Altman plots. Deformity severity and sagittal abnormality classifications were evaluated by confusion matrices. FINDINGS SpineHRNet+ achieved accurate landmark detection with mean Euclidean distance errors of 2·78 and 5·52 pixels on posteroanterior and lateral radiographs, respectively. The mean angle errors between predictions and GT were 3·18° and 6·32° coronally and sagittally. All predicted alignments were strongly correlated with GT (p < 0·001, R2 > 0·97), with minimal overall difference visualised via Bland-Altman plots. For curve detections, 95·7% sensitivity and 88·1% specificity were achieved, and for severity classification, 88·6-90·8% sensitivity was obtained. For sagittal abnormalities, greater than 85·2-88·9% specificity and sensitivity were achieved.
INTERPRETATION The auto-analysis provided by SpineHRNet+ was reliable and continuous and it might offer the potential to assist clinical work and facilitate large-scale clinical studies. FUNDING RGC Research Impact Fund (R5017-18F), Innovation and Technology Fund (ITS/404/18), and the AOSpine East Asia Fund (AOSEA(R)2019-06).
Collapse
Affiliation(s)
- Nan Meng
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Jason P.Y. Cheung
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Kwan-Yee K. Wong
- Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong, China
| | - Socrates Dokos
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
| | - Sofia Li
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Richard W. Choy
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Samuel To
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Ricardo J. Li
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| | - Teng Zhang
- Digital Health Laboratory, Queen Mary Hospital, Li Ka Shing Faculty of Medicine, The University of Hong Kong, 5/F, Professorial Block, Pokfulam, Hong Kong, China
| |
Collapse
|
17
|
Joint segmentation and classification task via adversarial network: Application to HEp-2 cell images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108156] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
18
|
Hillsley A, Santos JE, Rosales AM. A deep learning approach to identify and segment alpha-smooth muscle actin stress fiber positive cells. Sci Rep 2021; 11:21855. [PMID: 34750438 PMCID: PMC8575943 DOI: 10.1038/s41598-021-01304-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Accepted: 10/26/2021] [Indexed: 01/04/2023] Open
Abstract
Cardiac fibrosis is a pathological process characterized by excessive tissue deposition, matrix remodeling, and tissue stiffening, which eventually leads to organ failure. On a cellular level, the development of fibrosis is associated with the activation of cardiac fibroblasts into myofibroblasts, a highly contractile and secretory phenotype. Myofibroblasts are commonly identified in vitro by the de novo assembly of alpha-smooth muscle actin stress fibers; however, there are few methods to automate stress fiber identification, which can lead to subjectivity and tedium in the process. To address this limitation, we present a computer vision model to classify and segment cells containing alpha-smooth muscle actin stress fibers into 2 classes (α-SMA SF+ and α-SMA SF-), with a high degree of accuracy (cell accuracy: 77%, F1 score 0.79). The model combines standard image processing methods with deep learning techniques to achieve semantic segmentation of the different cell phenotypes. We apply this model to cardiac fibroblasts cultured on hyaluronic acid-based hydrogels of various moduli to induce alpha-smooth muscle actin stress fiber formation. The model successfully predicts the same trends in stress fiber identification as obtained with a manual analysis. Taken together, this work demonstrates a process to automate stress fiber identification in in vitro fibrotic models, thereby increasing reproducibility in fibroblast phenotypic characterization.
Collapse
Affiliation(s)
- Alexander Hillsley
- McKetta Department of Chemical Engineering, University of Texas at Austin, Austin, TX, USA
| | - Javier E Santos
- Hildebrand Department of Petroleum and Geosystems Engineering, University of Texas at Austin, Austin, TX, USA
| | - Adrianne M Rosales
- McKetta Department of Chemical Engineering, University of Texas at Austin, Austin, TX, USA.
| |
Collapse
|
19
|
Chattoraj S, Chakraborty A, Gupta A, Vishwakarma Y, Vishwakarma K, Aparajeeta J. Deep Phenotypic Cell Classification using Capsule Neural Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4031-4036. [PMID: 34892115 DOI: 10.1109/embc46164.2021.9629862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recent developments in ultra-high-throughput microscopy have created a new generation of cell classification methodologies focused solely on image-based cell phenotypes. These image-based analyses enable morphological profiling and screening of thousands or even millions of single cells at a fraction of the cost. They have been shown to demonstrate the statistical significance required for understanding the role of cell heterogeneity in diverse biological systems. However, these single-cell analysis techniques are slow and require expensive genetic/epigenetic analysis. This treatise proposes an innovative DL system based on the newly created capsule networks (CapsNet) architecture. The proposed deep CapsNet model employs "Capsules" for high-level feature abstraction relevant to the cell category. Experiments demonstrate that our proposed system can accurately classify different types of cells based on phenotypic label-free bright-field images with over 98.06% accuracy and that deep CapsNet models outperform CNN models in the prior art.
Collapse
|
20
|
Maurya R, Pathak VK, Dutta MK. Deep learning based microscopic cell images classification framework using multi-level ensemble. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106445. [PMID: 34627021 DOI: 10.1016/j.cmpb.2021.106445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Accepted: 09/26/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVES Advancements in ultra-fast microscopic image acquisition and generation techniques have given rise to automated artificial intelligence (AI)-based microscopic image classification systems. Earlier cell classification systems classify the cell images of a specific type captured using a specific microscopy technique; therefore, the motivation behind the present study is to develop a generic framework that can be used for the classification of cell images of multiple types captured using a variety of microscopic techniques. METHODS The proposed framework for microscopic cell images classification is based on the transfer learning-based multi-level ensemble approach. The ensemble is made by training the same base model with different optimisation methods and different learning rates. An important contribution of the proposed framework lies in its ability to capture different granularities of features extracted from multiple scales of an input microscopic cell image. The base learners used in the proposed ensemble encapsulate the aggregation of low-level coarse features and high-level semantic features and thus represent the different granular microscopic cell image features present at different scales of the input cell images. A batch normalisation layer has been added to the base models for fast convergence in the proposed ensemble for microscopic cell images classification. RESULTS The general applicability of the proposed framework for microscopic cell image classification has been tested with five different public datasets. The proposed method has outperformed the experimental results obtained in several other similar works. CONCLUSIONS The proposed framework for microscopic cell classification outperforms the other state-of-the-art classification methods in the same domain with a comparatively lesser amount of training data.
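The ensemble fusion described here (the same backbone trained with different optimisers and learning rates, combined at prediction time) reduces, at its simplest, to averaging the learners' class-probability outputs. A hedged sketch with illustrative names, not the paper's code:

```python
import numpy as np

def ensemble_predict(prob_outputs):
    """Fuse the class-probability outputs of several base learners
    (e.g. the same base model trained with different optimisers and
    learning rates) by averaging, then return the winning class index."""
    avg = np.mean(np.stack([np.asarray(p) for p in prob_outputs]), axis=0)
    return int(np.argmax(avg))
```

Probability averaging is only one fusion rule; weighted averaging or majority voting over the learners' hard labels are common alternatives.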
Collapse
Affiliation(s)
- Ritesh Maurya
- Centre for Advanced Studies, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, India.
| | | | - Malay Kishore Dutta
- Centre for Advanced Studies, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, India.
| |
Collapse
|
21
|
Chen D, Wang Z, Chen K, Zeng Q, Wang L, Xu X, Liang J, Chen X. Classification of unlabeled cells using lensless digital holographic images and deep neural networks. Quant Imaging Med Surg 2021; 11:4137-4148. [PMID: 34476194 DOI: 10.21037/qims-21-16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 05/08/2021] [Indexed: 11/06/2022]
Abstract
Background Image-based cell analytic methodologies offer a relatively simple and economical way to analyze and understand cell heterogeneities and developments. Owing to developments in high-resolution image sensors and high-performance computation processors, the emerging lensless digital holography technique enables a simple and cost-effective approach to obtain label-free cell images with a large field of view and microscopic spatial resolution. Methods The holograms of three types of cells, including MCF-10A, EC-109, and MDA-MB-231 cells, were recorded using a lensless digital holography system composed of a laser diode, a sample stage, an image sensor, and a laptop computer. The amplitude images were reconstructed using the angular spectrum method, and the sample-to-sensor distance was determined using the autofocusing criteria based on the sparsity of image edges and corner points. Four convolutional neural networks (CNNs) were used to classify the cell types based on the recovered holographic images. Results Classification of two cell types and three cell types achieved an accuracy of higher than 91% by all the networks used. The ResNet and the DenseNet models had similar classification accuracy of 95% or greater, outperforming the GoogLeNet and the CNN-5 models. Conclusions These experiments demonstrated that the CNNs were effective at classifying two or three types of tumor cells. The lensless holography combined with machine learning holds great promise in the application of stain-free cell imaging and classification, such as in cancer diagnosis and cancer biology research, where distinguishing normal cells from cancer cells and recognizing different cancer cell types will be greatly beneficial.
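The angular spectrum method named in this abstract propagates a complex optical field between the sensor and sample planes by filtering its spatial-frequency spectrum with the free-space transfer function. A minimal NumPy sketch (grid size, wavelength and distance values are illustrative, not taken from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over distance z via the angular spectrum
    method: FFT, multiply by the free-space transfer function
    H = exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2)), inverse FFT.
    Evanescent frequencies (negative argument) are suppressed."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)            # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)            # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Smooth test object: Gaussian transmittance on a 64x64 grid, 1 um pixels.
y, x = np.mgrid[-32:32, -32:32]
obj = np.exp(-(x**2 + y**2) / (2 * 8.0**2)).astype(complex)
holo = angular_spectrum_propagate(obj, wavelength=0.5e-6, dx=1e-6, z=1e-3)
recon = angular_spectrum_propagate(holo, wavelength=0.5e-6, dx=1e-6, z=-1e-3)
```

On the propagating band |H| = 1, so the operation preserves energy and is inverted simply by propagating with -z, which is how reconstruction at the autofocused sample-to-sensor distance works.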
Collapse
Affiliation(s)
- Duofang Chen
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Zhaohui Wang
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Kai Chen
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Qi Zeng
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Lin Wang
- School of Computer Science, Xi'an Polytechnic University, Xi'an, China
| | - Xinyi Xu
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Jimin Liang
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| | - Xueli Chen
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
| |
Collapse
|
22
|
Suzuki G, Saito Y, Seki M, Evans-Yamamoto D, Negishi M, Kakoi K, Kawai H, Landry CR, Yachie N, Mitsuyama T. Machine learning approach for discrimination of genotypes based on bright-field cellular images. NPJ Syst Biol Appl 2021; 7:31. [PMID: 34290253 PMCID: PMC8295336 DOI: 10.1038/s41540-021-00190-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 07/01/2021] [Indexed: 12/19/2022] Open
Abstract
Morphological profiling is a combination of established optical microscopes and cutting-edge machine vision technologies, which stacks up successful applications in high-throughput phenotyping. One major question is how much information can be extracted from an image to identify genetic differences between cells. While fluorescent microscopy images of specific organelles have been broadly used for single-cell profiling, the potential ability of bright-field (BF) microscopy images of label-free cells remains to be tested. Here, we examine whether single-gene perturbation can be discriminated based on BF images of label-free cells using a machine learning approach. We acquired hundreds of BF images of single-gene mutant cells, quantified single-cell profiles consisting of texture features of cellular regions, and constructed a machine learning model to discriminate mutant cells from wild-type cells. Interestingly, the mutants were successfully discriminated from the wild type (area under the receiver operating characteristic curve = 0.773). The features that contributed to the discrimination were identified, and they included those related to the morphology of structures that appeared within cellular regions. Furthermore, functionally close gene pairs showed similar feature profiles of the mutant cells. Our study reveals that single-gene mutant cells can be discriminated from wild-type cells based on BF images, suggesting the potential as a useful tool for mutant cell profiling.
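The reported discrimination score (area under the ROC curve = 0.773) has a direct probabilistic reading: the chance that a randomly chosen mutant-cell score ranks above a randomly chosen wild-type score. A small sketch of that Mann-Whitney computation (function and variable names are ours, not from the paper):

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 means the classifier ranks pairs no better than chance, so 0.773 indicates a substantial, if imperfect, separation between mutant and wild-type profiles.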
Collapse
Affiliation(s)
- Godai Suzuki
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, 135-0064, Japan
| | - Yutaka Saito
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, 135-0064, Japan
- AIST-Waseda University Computational Bio Big-Data Open Innovation Laboratory (CBBD-OIL), Tokyo, 169-8555, Japan
- Graduate School of Frontier Sciences, The University of Tokyo, Chiba, 277-8561, Japan
| | - Motoaki Seki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, 153-8904, Japan
| | - Daniel Evans-Yamamoto
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, 153-8904, Japan
- Institute for Advanced Biosciences, Keio University, Tsuruoka, 997-0035, Japan
- Systems Biology Program, Graduate School of Media and Governance, Keio University, Fujisawa, 252-0882, Japan
| | - Mikiko Negishi
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, 153-8904, Japan
| | - Kentaro Kakoi
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, 153-8904, Japan
| | - Hiroki Kawai
- Research and Development Department, LPIXEL Inc., Tokyo, 100-0004, Japan
| | - Christian R Landry
- Institut de Biologie Intégrative et des Systémes, Université Laval, Québec, QC, G1V 0A6, Canada
- Département de Biochimie, Microbiologie et Bio-informatique, Faculté de sciences et génie, Université Laval, Québec, QC, G1V 0A6, Canada
- PROTEO, le regroupement québécois de recherche sur la fonction, l'ingénierie et les applications des protéines, Université Laval, Québec, QC, G1V 0A6, Canada
- Centre de Recherche en Données Massives (CRDM), Université Laval, Québec, QC, G1V 0A6, Canada
- Département de Biologie, Faculté des sciences et de Génie, Université Laval, Québec, QC, G1V 0A6, Canada
| | - Nozomu Yachie
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, 153-8904, Japan.
- Institute for Advanced Biosciences, Keio University, Tsuruoka, 997-0035, Japan.
- Systems Biology Program, Graduate School of Media and Governance, Keio University, Fujisawa, 252-0882, Japan.
- School of Biomedical Engineering, The University of British Columbia, Vancouver, V6T1Z3, Canada.
| | - Toutai Mitsuyama
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, 135-0064, Japan.
| |
Collapse
|
23
|
Tewary S, Mukhopadhyay S. HER2 Molecular Marker Scoring Using Transfer Learning and Decision Level Fusion. J Digit Imaging 2021; 34:667-677. [PMID: 33742331 PMCID: PMC8329150 DOI: 10.1007/s10278-021-00442-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 01/13/2021] [Accepted: 03/01/2021] [Indexed: 01/28/2023] Open
Abstract
In the prognostic evaluation of breast cancer, the immunohistochemical (IHC) marker human epidermal growth factor receptor 2 (HER2) is used. Accurate assessment of the HER2-stained tissue sample is essential in therapeutic decision making for the patients. In regular clinical settings, expert pathologists assess the HER2-stained tissue slide under a microscope for manual scoring based on prior experience. Manual scoring is time consuming, tedious, and often prone to inter-observer variation among a group of pathologists. With the recent advancement in the areas of computer vision and deep learning, medical image analysis has received significant attention. A number of deep learning architectures have been proposed for classification of different image groups. These networks are also used for transfer learning to classify other image classes. In the presented study, a number of transfer learning architectures are used for HER2 scoring. Five pre-trained architectures, viz. VGG16, VGG19, ResNet50, MobileNetV2, and NASNetMobile, with the fully connected layers decimated to obtain 3-class classification, have been used for the comparative assessment of the networks as well as further scoring of stained tissue sample images based on statistical voting using the mode operator. The HER2 Challenge dataset from Warwick University is used in this study. A total of 2130 image patches were extracted to generate the training dataset from 300 training images corresponding to 30 training cases. The output model is then tested on 800 new test image patches from 100 test images acquired from 10 test cases (different from the training cases) to report the outcome results. The transfer learning models have shown significant accuracy, with VGG19 showing the best accuracy for the test images. The accuracy is found to be 93%, which increases to 98% on image-based scoring using the statistical voting mechanism. The output shows a capable quantification pipeline for automated HER2 score generation.
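The decision-level fusion step described here, an image-level score taken as the statistical mode of the patch-level predictions, can be sketched as follows (an illustration with hypothetical names, not the authors' code):

```python
from collections import Counter

def image_level_score(patch_predictions):
    """Fuse per-patch class predictions into a single image-level score
    via statistical voting: the mode (most common) of the patch labels."""
    return Counter(patch_predictions).most_common(1)[0][0]
```

Voting over many patches suppresses individual patch misclassifications, which is consistent with the jump from 93% patch-level to 98% image-level accuracy reported above.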
Collapse
Affiliation(s)
- Suman Tewary
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, India
- Computational Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
| | - Sudipta Mukhopadhyay
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India.
| |
Collapse
|
24
|
Liu Z, Jin L, Chen J, Fang Q, Ablameyko S, Yin Z, Xu Y. A survey on applications of deep learning in microscopy image analysis. Comput Biol Med 2021; 134:104523. [PMID: 34091383 DOI: 10.1016/j.compbiomed.2021.104523] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 05/13/2021] [Accepted: 05/17/2021] [Indexed: 01/12/2023]
Abstract
Advanced microscopy enables us to acquire quantities of time-lapse images to visualize the dynamic characteristics of tissues, cells or molecules. Microscopy images typically vary in signal-to-noise ratio and include a wealth of information, which requires multiple parameters and time-consuming iterative algorithms for processing. Precise analysis and statistical quantification are often needed for the understanding of the biological mechanisms underlying these dynamic image sequences, which has become a big challenge in the field. As deep learning technologies develop quickly, they have been applied in bioimage processing more and more frequently. Novel deep learning models based on convolutional neural networks have been developed and illustrated to achieve inspiring outcomes. This review article introduces the applications of deep learning algorithms in microscopy image analysis, which include image classification, region segmentation, object tracking and super-resolution reconstruction. We also discuss the drawbacks of existing deep learning-based methods, especially the challenges of training dataset acquisition and evaluation, and propose potential solutions. Furthermore, the latest development of augmented intelligent microscopy based on deep learning technology may lead to a revolution in biomedical research.
Collapse
Affiliation(s)
- Zhichao Liu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Luhong Jin
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Jincheng Chen
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Qiuyu Fang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China
| | - Sergey Ablameyko
- National Academy of Sciences, United Institute of Informatics Problems, Belarusian State University, Minsk, 220012, Belarus
| | - Zhaozheng Yin
- AI Institute, Department of Biomedical Informatics and Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Department of Endocrinology, The Affiliated Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310016, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China.
| |
Collapse
|
25
|
Luo S, Shi Y, Chin LK, Zhang Y, Wen B, Sun Y, Nguyen BTT, Chierchia G, Talbot H, Bourouina T, Jiang X, Liu AQ. Rare bioparticle detection via deep metric learning. RSC Adv 2021; 11:17603-17610. [PMID: 35480202 PMCID: PMC9032704 DOI: 10.1039/d1ra02869c] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 05/07/2021] [Indexed: 11/21/2022] Open
Abstract
Recent deep neural networks have shown superb performance in analyzing bioimages for disease diagnosis and bioparticle classification. Conventional deep neural networks use simple classifiers such as SoftMax to obtain highly accurate results. However, they have limitations in many practical applications that require both a low false alarm rate and a high recovery rate, e.g., rare bioparticle detection, in which representative image data are hard to collect, the training data are imbalanced, and the input images at inference time could differ from the training images. Deep metric learning offers better generalizability by using distance information to model the similarity of images and learning a mapping from image pixels to a latent space, playing a vital role in rare object detection. In this paper, we propose a robust model based on a deep metric neural network for rare bioparticle (Cryptosporidium or Giardia) detection in drinking water. Experimental results showed that the deep metric neural network achieved a high accuracy of 99.86% in classification, a 98.89% precision rate, a 99.16% recall rate and a zero false alarm rate. The reported model empowers imaging flow cytometry with capabilities of biomedical diagnosis, environmental monitoring, and other biosensing applications.
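The distance-based decision rule that deep metric learning enables can be illustrated with a nearest-centroid classifier in the learned latent space, plus a rejection threshold to keep the false alarm rate low. A hedged sketch; the class names are from the abstract, but the 2D centroids, threshold and function names are purely illustrative:

```python
import numpy as np

def classify_with_rejection(embedding, centroids, threshold):
    """Nearest-centroid decision in the learned latent space: assign the
    closest class, but reject (return None) when every centroid is
    farther than the threshold, keeping the false alarm rate low."""
    dists = {label: np.linalg.norm(embedding - c)
             for label, c in centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else None

# Illustrative 2D embeddings for the two bioparticle classes.
centroids = {"Cryptosporidium": np.array([1.0, 0.0]),
             "Giardia": np.array([0.0, 1.0])}
```

The rejection branch is what a SoftMax classifier lacks: an out-of-distribution particle far from every centroid is flagged rather than forced into the nearest class.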
Collapse
Affiliation(s)
- Shaobo Luo
- ESYCOM, CNRS UMR 9007, Universite Gustave Eiffel, ESIEE Paris Noisy-le-Grand 93162 France .,Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (ASTAR) 138668 Singapore
| | - Yuzhi Shi
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore
| | - Lip Ket Chin
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore .,Center for Systems Biology, Massachusetts General Hospital Massachusetts 02114 USA
| | - Yi Zhang
- School of Mechanical & Aerospace Engineering, Nanyang Technological University 639798 Singapore
| | - Bihan Wen
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore
| | - Ying Sun
- Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (ASTAR) 138668 Singapore
| | - Binh T T Nguyen
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore
| | - Giovanni Chierchia
- ESYCOM, CNRS UMR 9007, Universite Gustave Eiffel, ESIEE Paris Noisy-le-Grand 93162 France
| | - Hugues Talbot
- CentraleSupelec, Universite Paris-Saclay Saint-Aubin 91190 France
| | - Tarik Bourouina
- ESYCOM, CNRS UMR 9007, Universite Gustave Eiffel, ESIEE Paris Noisy-le-Grand 93162 France
| | - Xudong Jiang
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore
| | - Ai-Qun Liu
- School of Electrical & Electronic Engineering, Nanyang Technological University 639798 Singapore .,Nanyang Environment and Water Research Institute, Nanyang Technological University 637141 Singapore
| |
Collapse
|
26
|
A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis. Med Image Anal 2021; 72:102099. [PMID: 34098240 DOI: 10.1016/j.media.2021.102099] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 04/26/2021] [Accepted: 04/27/2021] [Indexed: 01/16/2023]
Abstract
Multiple Myeloma (MM) is a malignancy of plasma cells. Like other forms of cancer, it demands prompt diagnosis to reduce the risk of mortality. The conventional diagnostic tools are resource-intensive, and hence these solutions are not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid development of affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses two key challenges: the inter-class visual similarity of healthy versus cancer cells, and the label noise of the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation on the respective class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that helps achieve improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct the noisy labels. A coupling classifier has also been proposed to resolve conflicts in the dual-branch architecture's predictions. We have utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74996 images (34555 training and 40441 test cell images), so far the most extensive dataset on Multiple Myeloma cancer reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary classification of healthy versus cancer cells, in a comparative evaluation against ten state-of-the-art architectures.
Extensive experiments on two additional publicly available datasets of two different modalities have also been utilized for analyzing the label noise handling capability of the proposed methodology. The code will be available at https://github.com/shivgahlout/CAD-MM.
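The projection-loss idea described in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction under assumed conventions (unit-norm class vectors, a squared off-diagonal Gram penalty for the orthogonality constraint, and an `ortho_weight` knob), not the paper's exact formulation:

```python
import numpy as np

def projection_loss(features, labels, class_vectors, ortho_weight=1.0):
    """Illustrative sketch: reward a large projection of each sample's
    activation onto its own class vector, and penalize non-orthogonal
    class vectors via the off-diagonal entries of their Gram matrix."""
    # Normalize class vectors to unit length so projections are comparable.
    V = class_vectors / np.linalg.norm(class_vectors, axis=1, keepdims=True)
    # Projection of each sample's feature vector onto its true class vector.
    proj = np.sum(features * V[labels], axis=1)
    proj_term = -np.mean(proj)  # maximizing projection = minimizing its negative
    # Orthogonality penalty: squared off-diagonal Gram entries.
    gram = V @ V.T
    ortho_term = np.sum((gram - np.diag(np.diag(gram))) ** 2)
    return proj_term + ortho_weight * ortho_term
```

With orthonormal class vectors and unit-norm features aligned to them, the loss reaches its minimum of -1; in training, a term like this would be added to the cross-entropy loss as the abstract describes.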
27
Lee KCM, Guck J, Goda K, Tsia KK. Toward deep biophysical cytometry: prospects and challenges. Trends Biotechnol 2021; 39:1249-1262. [PMID: 33895013] [DOI: 10.1016/j.tibtech.2021.03.006]
Abstract
The biophysical properties of cells reflect their identities, underpin their homeostatic state in health, and define the pathogenesis of disease. Recent leapfrogging advances in biophysical cytometry now give access to this information, which is obscured in molecular assays, with a discriminative power that was once inconceivable. However, biophysical cytometry should go 'deeper' in terms of exploiting the information-rich cellular biophysical content, generating a molecular knowledge base of cellular biophysical properties, and standardizing the protocols for wider dissemination. Overcoming these barriers, which requires concurrent innovations in microfluidics, optical imaging, and computer vision, could unleash the enormous potential of biophysical cytometry not only for gaining a new mechanistic understanding of biological systems but also for identifying new cost-effective biomarkers of disease.
Affiliation(s)
- Kelvin C M Lee
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong
- Jochen Guck
- Max Planck Institute for the Science of Light, and Max-Planck-Zentrum für Physik und Medizin, 91058 Erlangen, Germany; Department of Physics, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Keisuke Goda
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan; Institute of Technological Sciences, Wuhan University, Hubei 430072, China; Department of Bioengineering, University of California, Los Angeles, California 90095, USA
- Kevin K Tsia
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong; Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
28
Meng N, Li K, Liu J, Lam EY. Light Field View Synthesis via Aperture Disparity and Warping Confidence Map. IEEE Trans Image Process 2021; 30:3908-3921. [PMID: 33750690] [DOI: 10.1109/tip.2021.3066293]
Abstract
This paper presents a learning-based approach to synthesize the view from an arbitrary camera position given a sparse set of images. A key challenge for this novel view synthesis arises from the reconstruction process, when the views from different input images may not be consistent due to obstruction in the light path. We overcome this by jointly modeling the epipolar property and occlusion in designing a convolutional neural network. We start by defining and computing the aperture disparity map, which approximates the parallax and measures the pixel-wise shift between two views. While this relates to free-space rendering and can fail near object boundaries, we further develop a warping confidence map to address pixel occlusion in these challenging regions. The proposed method is evaluated on diverse real-world and synthetic light field scenes, and it shows better performance than several state-of-the-art techniques.
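The notion of a pixel-wise shift between two views can be illustrated with a toy integer-disparity estimator. This is a generic sketch (an exhaustive sum-of-squared-differences search over candidate shifts on 1-D scanlines), not the paper's learned aperture disparity map:

```python
import numpy as np

def estimate_shift(view_a, view_b, max_shift=5):
    """Toy disparity estimate: find the integer shift of view_b that best
    aligns it with view_a by minimizing the sum of squared differences."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((view_a - np.roll(view_b, s)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

A real light-field pipeline predicts such shifts densely per pixel with a CNN and pairs them with a warping confidence map near occlusions, exactly where alignment-based estimates like this one break down.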
29
Luo S, Nguyen KT, Nguyen BTT, Feng S, Shi Y, Elsayed A, Zhang Y, Zhou X, Wen B, Chierchia G, Talbot H, Bourouina T, Jiang X, Liu AQ. Deep learning-enabled imaging flow cytometry for high-speed Cryptosporidium and Giardia detection. Cytometry A 2021; 99:1123-1133. [PMID: 33550703] [DOI: 10.1002/cyto.a.24321]
Abstract
Imaging flow cytometry has become a popular technology for bioparticle image analysis because of its capability of capturing thousands of images per second. Nevertheless, the vast number of images generated by imaging flow cytometry imposes great challenges for data analysis especially when the species have similar morphologies. In this work, we report a deep learning-enabled high-throughput system for predicting Cryptosporidium and Giardia in drinking water. This system combines imaging flow cytometry and an efficient artificial neural network called MCellNet, which achieves a classification accuracy >99.6%. The system can detect Cryptosporidium and Giardia with a sensitivity of 97.37% and a specificity of 99.95%. The high-speed analysis reaches 346 frames per second, outperforming the state-of-the-art deep learning algorithm MobileNetV2 in speed (251 frames per second) with a comparable classification accuracy. The reported system empowers rapid, accurate, and high throughput bioparticle detection in clinical diagnostics, environmental monitoring and other potential biosensing applications.
Affiliation(s)
- Shaobo Luo
- ESIEE, Universite Paris-Est, Noisy-le-Grand Cedex, France; Nanyang Environment and Water Research Institute, Nanyang Technological University, Singapore
- Kim Truc Nguyen
- Nanyang Environment and Water Research Institute, Nanyang Technological University, Singapore; School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Binh T T Nguyen
- School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Shilun Feng
- Nanyang Environment and Water Research Institute, Nanyang Technological University, Singapore; School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Yuzhi Shi
- School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Ahmed Elsayed
- ESIEE, Universite Paris-Est, Noisy-le-Grand Cedex, France
- Yi Zhang
- School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore
- Xiaohong Zhou
- Research Centre of Environmental and Health Sensing Technology, School of Environment, Tsinghua University, Beijing, China
- Bihan Wen
- School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Hugues Talbot
- CentraleSupelec, Universite Paris-Saclay, Saint-Aubin, France
- Xudong Jiang
- School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
- Ai Qun Liu
- Nanyang Environment and Water Research Institute, Nanyang Technological University, Singapore; School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore
30
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9]
31
Djawad YA, Kiely J, Luxton R. Classification of the mechanism of toxicity as applied to human cell line ECV304. Comput Methods Biomech Biomed Engin 2020; 24:933-944. [PMID: 33356573] [DOI: 10.1080/10255842.2020.1861255]
Abstract
The objective of this study was to identify the pattern of cytotoxicity testing of the human cell line ECV304 using three ensemble learning techniques (bagging, boosting, and stacking). The study of cell morphology of the ECV304 cell line was conducted using impedimetric measurement. Three toxins were applied to the ECV304 cell line: 1 mM hydrogen peroxide (H2O2), 5% dimethyl sulfoxide, and 10 μg saponin. The measurement was conducted using electrodes and a lock-in amplifier to detect impedance changes during cytotoxicity testing within a frequency range of 200 to 830 kHz. The results were analysed, processed, and extracted using detrended fluctuation analysis to obtain characteristics and features of the cells when exposed to each of the toxins. The three ensemble algorithms showed slightly different classification performance on the extracted feature set. However, the results show that the cell reaction to the toxins could be classified.
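As a rough illustration of bagging, the simplest of the three ensemble techniques named above, the sketch below bootstrap-resamples a training set, fits a decision stump to each resample, and majority-votes the predictions. The data, stump learner, and parameter names are invented for illustration and are unrelated to the paper's impedance features:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Fit a decision stump: best (feature, threshold, polarity) by accuracy."""
    best, best_acc = (0, 0.0, 1), -1.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                acc = np.mean((pol * (X[:, j] - t) > 0).astype(int) == y)
                if acc > best_acc:
                    best, best_acc = (j, t, pol), acc
    return best

def predict_stump(stump, X):
    j, t, pol = stump
    return (pol * (X[:, j] - t) > 0).astype(int)

def bagging_predict(X, y, X_test, n_estimators=15):
    """Bagging: train each stump on a bootstrap resample, then majority-vote."""
    votes = np.zeros(len(X_test))
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))
        votes += predict_stump(fit_stump(X[idx], y[idx]), X_test)
    return (votes > n_estimators / 2).astype(int)
```

Boosting instead reweights misclassified samples between rounds, and stacking trains a meta-classifier on the base learners' outputs.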
Affiliation(s)
- Yasser Abd Djawad
- Department of Electronics, Universitas Negeri Makassar, Makassar, Indonesia
32
Dong N, Zhao L, Wu C, Chang J. Inception v3 based cervical cell classification combined with artificially extracted features. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106311]
33
Eastman AE, Guo S. The palette of techniques for cell cycle analysis. FEBS Lett 2020; 594. [PMID: 32441778] [PMCID: PMC9261528] [DOI: 10.1002/1873-3468.13842]
Abstract
The cell division cycle is the generational period of cellular growth and propagation. Cell cycle progression needs to be highly regulated to preserve genomic fidelity while increasing cell number. In multicellular organisms, the cell cycle must also coordinate with cell fate specification during development and tissue homeostasis. Altered cell cycle dynamics also play a central role in a number of pathophysiological processes. Thus, extensive effort has been made to define the biochemical machineries that execute the cell cycle and their regulation, as well as to implement more sensitive and accurate cell cycle measurements. Here, we review the available techniques for cell cycle analysis, revisiting the assumptions behind conventional population-based measurements and discussing new tools to better address cell cycle heterogeneity in the single-cell era. We weigh the strengths, weaknesses, and trade-offs of methods designed to measure temporal aspects of the cell cycle. Finally, we discuss emerging techniques for capturing cell cycle speed at single-cell resolution in live animals.
Affiliation(s)
- Anna E Eastman
- Department of Cell Biology and Yale Stem Cell Center, Yale University, New Haven, CT, USA
- Shangqin Guo
- Department of Cell Biology and Yale Stem Cell Center, Yale University, New Haven, CT, USA
34
Nagao Y, Sakamoto M, Chinen T, Okada Y, Takao D. Robust classification of cell cycle phase and biological feature extraction by image-based deep learning. Mol Biol Cell 2020; 31:1346-1354. [PMID: 32320349] [PMCID: PMC7353138] [DOI: 10.1091/mbc.e20-03-0187]
Abstract
Across the cell cycle, subcellular organization undergoes major spatiotemporal changes that could in principle serve as biological features representing cell cycle phase. We applied convolutional neural network-based classifiers to extract such putative features from fluorescence microscope images of cells stained for the nucleus, the Golgi apparatus, and the microtubule cytoskeleton. We demonstrate that cell images can be robustly classified into G1/S and G2 cell cycle phases without the need for specific cell cycle markers. Grad-CAM analysis of the classification models enabled us to extract several pairs of quantitative parameters of specific subcellular features that serve as good classifiers of cell cycle phase. These results collectively demonstrate that machine learning-based image processing is useful for extracting biological features underlying cellular phenomena of interest in an unbiased, data-driven manner.
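The Grad-CAM step mentioned in this abstract weights a convolutional layer's activation maps by the spatially averaged gradients of the class score and keeps the positive part. A minimal sketch of that computation on precomputed arrays (the activations and gradients would come from a trained CNN, assumed given here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.
    activations, gradients: arrays of shape (channels, H, W), where
    gradients holds d(class score)/d(activations)."""
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of activation maps over channels -> (H, W) map.
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU keeps only features with positive influence on the class score.
    return np.maximum(cam, 0)
```

The resulting heatmap highlights which image regions drove a given phase decision, which is how analyses of this kind surface interpretable subcellular features.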
Affiliation(s)
- Yukiko Nagao
- Faculty of Pharmaceutical Sciences, The University of Tokyo, Tokyo 113-0033, Japan
- Mika Sakamoto
- Genome Informatics Laboratory, National Institute of Genetics, Mishima 411-8540, Japan
- Takumi Chinen
- Faculty of Pharmaceutical Sciences, The University of Tokyo, Tokyo 113-0033, Japan
- Yasushi Okada
- Department of Cell Biology and Anatomy and International Research Center for Neurointelligence (WPI-IRCN), Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan; Department of Physics and Universal Biology Institute (UBI), Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan; Laboratory for Cell Polarity Regulation, Center for Biosystems Dynamics Research (BDR), RIKEN, Osaka 565-0874, Japan
- Daisuke Takao
- Department of Cell Biology and Anatomy and International Research Center for Neurointelligence (WPI-IRCN), Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan
35
Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J Biomed Health Inform 2020; 24:984-993. [DOI: 10.1109/jbhi.2019.2960821]
36
Sun J, Tárnok A, Su X. Deep Learning-Based Single-Cell Optical Image Studies. Cytometry A 2020; 97:226-240. [PMID: 31981309] [DOI: 10.1002/cyto.a.23973]
Abstract
Optical imaging technology, with its advantages of high sensitivity and cost-effectiveness, greatly promotes progress in nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical images are obtained that often require machine learning for data analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, which brings a new dawn for single-cell optical image studies amid an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, reviewing popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning. Image preprocessing and deep learning model training methods are then summarized. Applications based on deep learning techniques in the field of single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also covered. Finally, we discuss perspectives on deep learning technology for single-cell optical image analysis. © 2020 International Society for Advancement of Cytometry.
Affiliation(s)
- Jing Sun
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
- Attila Tárnok
- Department of Therapy Validation, Fraunhofer Institute for Cell Therapy and Immunology (IZI), Leipzig, Germany; Institute for Medical Informatics, Statistics and Epidemiology (IMISE), University of Leipzig, Leipzig, Germany
- Xuantao Su
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
37
Immunology Driven by Large-Scale Single-Cell Sequencing. Trends Immunol 2019; 40:1011-1021. [DOI: 10.1016/j.it.2019.09.004]
38
Shi R, Wong JSJ, Lam EY, Tsia KK, So HKH. A Real-Time Coprime Line Scan Super-Resolution System for Ultra-Fast Microscopy. IEEE Trans Biomed Circuits Syst 2019; 13:781-792. [PMID: 31059454] [DOI: 10.1109/tbcas.2019.2914946]
Abstract
A fundamental technical challenge for ultra-fast cell microscopy is the tradeoff between imaging throughput and resolution. Beyond throughput, real-time applications such as image-based cell sorting further require ultra-low imaging latency to facilitate rapid decision making at the single-cell level. Using a novel coprime line scan sampling scheme, a real-time low-latency hardware super-resolution system for ultra-fast time-stretch microscopy is presented. The proposed scheme utilizes an analog-to-digital converter with a carefully tuned sampling pattern (a shifted sampling grid) to enable super-resolution image reconstruction from the line scan input of an optical front-end. A fully pipelined FPGA-based system is built to efficiently handle the real-time high-resolution image reconstruction process with the input subpixel samples while achieving minimal output latency. The proposed super-resolution sampling and reconstruction scheme is parametrizable and readily applicable to different line scan imaging systems. In our experiments, an imaging latency of 0.29 μs has been achieved at a pixel-stream throughput of 4.123 gigapixels per second, which translates into an imaging throughput of approximately 120,000 cells per second.