1. Carreras-Puigvert J, Spjuth O. Artificial intelligence for high content imaging in drug discovery. Curr Opin Struct Biol 2024;87:102842. PMID: 38797109. DOI: 10.1016/j.sbi.2024.102842.
Abstract
Artificial intelligence (AI) and high-content imaging (HCI) are contributing to advancements in drug discovery, propelled by the recent progress in deep neural networks. This review highlights AI's role in analysis of HCI data from fixed and live-cell imaging, enabling novel label-free and multi-channel fluorescent screening methods, and improving compound profiling. HCI experiments are rapid and cost-effective, facilitating large data set accumulation for AI model training. However, the success of AI in drug discovery also depends on high-quality data, reproducible experiments, and robust validation to ensure model performance. Despite challenges like the need for annotated compounds and managing vast image data, AI's potential in phenotypic screening and drug profiling is significant. Future improvements in AI, including increased interpretability and integration of multiple modalities, are expected to solidify AI and HCI's role in drug discovery.
Affiliation(s)
- Jordi Carreras-Puigvert
  - Department of Pharmaceutical Biosciences and Science for Life Laboratories, Uppsala University, Sweden
- Ola Spjuth
  - Department of Pharmaceutical Biosciences and Science for Life Laboratories, Uppsala University, Sweden
2. Ivanov IE, Hirata-Miyasaki E, Chandler T, Cheloor-Kovilakam R, Liu Z, Pradeep S, Liu C, Bhave M, Khadka S, Arias C, Leonetti MD, Huang B, Mehta SB. Mantis: high-throughput 4D imaging and analysis of the molecular and physical architecture of cells. bioRxiv [preprint] 2024:2023.12.19.572435. PMID: 38187521. PMCID: PMC10769231. DOI: 10.1101/2023.12.19.572435.
Abstract
High-throughput dynamic imaging of cells and organelles is essential for understanding complex cellular responses. We report Mantis, a high-throughput 4D microscope that integrates two complementary, gentle, live-cell imaging technologies: remote-refocus label-free microscopy and oblique light-sheet fluorescence microscopy. Additionally, we report shrimPy, an open-source software for high-throughput imaging, deconvolution, and single-cell phenotyping of 4D data. Using Mantis and shrimPy, we achieved high-content correlative imaging of molecular dynamics and the physical architecture of 20 cell lines every 15 minutes over 7.5 hours. This platform also facilitated detailed measurements of the impacts of viral infection on the architecture of host cells and host proteins. The Mantis platform can enable high-throughput profiling of intracellular dynamics, long-term imaging and analysis of cellular responses to perturbations, and live-cell optical screens to dissect gene regulatory networks.
Affiliation(s)
- Ivan E. Ivanov
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Talon Chandler
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Rasmi Cheloor-Kovilakam
  - Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
- Ziwen Liu
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Soorya Pradeep
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Chad Liu
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Madhura Bhave
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Sudip Khadka
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Carolina Arias
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Bo Huang
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
  - Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
- Shalin B. Mehta
  - Chan Zuckerberg Biohub San Francisco, San Francisco, United States
3. He S, Sillah M, Cole AR, Uboveja A, Aird KM, Chen YC, Gong YN. D-MAINS: A Deep-Learning Model for the Label-Free Detection of Mitosis, Apoptosis, Interphase, Necrosis, and Senescence in Cancer Cells. Cells 2024;13:1004. PMID: 38920634. PMCID: PMC11205186. DOI: 10.3390/cells13121004.
Abstract
BACKGROUND: Identifying cells engaged in fundamental cellular processes, such as proliferation or live/dead status, is pivotal across numerous research fields. However, prevailing methods relying on molecular biomarkers are constrained by high costs, limited specificity, protracted sample preparation, and reliance on fluorescence imaging.
METHODS: Based on cellular morphology in phase contrast images, we developed a deep-learning model named Detector of Mitosis, Apoptosis, Interphase, Necrosis, and Senescence (D-MAINS).
RESULTS: D-MAINS uses machine learning and image processing techniques to enable swift, label-free categorization of cell death, division, and senescence at single-cell resolution. D-MAINS achieved an accuracy of 96.4 ± 0.5% and was validated with established molecular biomarkers. It underwent rigorous testing under varied conditions not present in the training dataset and performed well across diverse scenarios, encompassing additional cell lines, drug treatments, and distinct microscopes with different objective lenses and magnifications, affirming its robustness and adaptability across multiple experimental setups.
CONCLUSIONS: D-MAINS showcases the feasibility of a low-cost, rapid, and label-free methodology for distinguishing various cellular states. Its versatility makes it a promising tool across a broad spectrum of biomedical research contexts, particularly in cell death and oncology studies.
Affiliation(s)
- Sarah He
  - Department of Biological Sciences, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Muhammed Sillah
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Immunology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Aidan R. Cole
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Apoorva Uboveja
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Katherine M. Aird
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Yu-Chih Chen
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Computational and Systems Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, 3700 O’Hara Street, Pittsburgh, PA 15260, USA
  - CMU-Pitt Ph.D. Program in Computational Biology, University of Pittsburgh, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Yi-Nan Gong
  - Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
  - Department of Immunology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
4. Katoh TA, Fukai YT, Ishibashi T. Optical microscopic imaging, manipulation, and analysis methods for morphogenesis research. Microscopy (Oxf) 2024;73:226-242. PMID: 38102756. PMCID: PMC11154147. DOI: 10.1093/jmicro/dfad059.
Abstract
Morphogenesis is a developmental process in which organisms are shaped through complex and cooperative cellular movements. To understand the interplay between genetic programs and the resulting multicellular morphogenesis, it is essential to characterize the morphologies and dynamics at the single-cell level and to understand how physical forces serve as both signaling components and driving forces of tissue deformations. In recent years, advances in microscopy techniques have led to improvements in imaging speed, resolution, and depth. Concurrently, the development of various software packages has supported large-scale analyses of challenging images at single-cell resolution. While these tools have enhanced our ability to examine the dynamics of cells and mechanical processes during morphogenesis, their effective integration requires specialized expertise. With this background, this review provides a practical overview of those techniques. First, we introduce microscopic techniques for multicellular imaging and image analysis software tools with a focus on cell segmentation and tracking. Second, we provide an overview of cutting-edge techniques for mechanical manipulation of cells and tissues. Finally, we introduce recent findings on morphogenetic mechanisms and mechanosensations that have been achieved by effectively combining microscopy, image analysis tools and mechanical manipulation techniques.
Affiliation(s)
- Takanobu A Katoh
  - Department of Cell Biology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Yohsuke T Fukai
  - Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
- Tomoki Ishibashi
  - Laboratory for Physical Biology, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
5. Perez-Lopez R, Ghaffari Laleh N, Mahmood F, Kather JN. A guide to artificial intelligence for cancer researchers. Nat Rev Cancer 2024;24:427-441. PMID: 38755439. DOI: 10.1038/s41568-024-00694-7.
Abstract
Artificial intelligence (AI) has been commoditized. It has evolved from a specialty resource to a readily accessible tool for cancer researchers. AI-based tools can boost research productivity in daily workflows, but can also extract hidden information from existing data, thereby enabling new scientific discoveries. Building a basic literacy in these tools is useful for every cancer researcher. Researchers with a traditional biological science focus can use AI-based tools through off-the-shelf software, whereas those who are more computationally inclined can develop their own AI-based software pipelines. In this article, we provide a practical guide for non-computational cancer researchers to understand how AI-based tools can benefit them. We convey general principles of AI for applications in image analysis, natural language processing and drug discovery. In addition, we give examples of how non-computational researchers can get started on the journey to productively use AI in their own work.
Affiliation(s)
- Raquel Perez-Lopez
  - Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
- Narmin Ghaffari Laleh
  - Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Faisal Mahmood
  - Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
  - Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
  - Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
  - Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
  - Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
- Jakob Nikolas Kather
  - Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
  - Department of Medicine I, University Hospital Dresden, Dresden, Germany
  - Medical Oncology, National Center for Tumour Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
6. Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024;21:1103-1113. PMID: 38532015. PMCID: PMC11210294. DOI: 10.1038/s41592-024-02233-6.
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyper-parameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark, comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
Affiliation(s)
- Jun Ma
  - Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
- Ronald Xie
  - Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
  - Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Shamini Ayyadhury
  - Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
  - Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Cheng Ge
  - School of Medicine and Pharmacy, Ocean University of China, Qingdao, China
- Anubha Gupta
  - Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India
- Ritu Gupta
  - Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India
- Song Gu
  - Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China
- Yao Zhang
  - Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Gihun Lee
  - Graduate School of AI, KAIST, Seoul, South Korea
- Joonkee Kim
  - Graduate School of AI, KAIST, Seoul, South Korea
- Wei Lou
  - Shenzhen Research Institute of Big Data, Shenzhen, China
  - Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Haofeng Li
  - Shenzhen Research Institute of Big Data, Shenzhen, China
- Eric Upschulte
  - Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Timo Dickscheid
  - Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
  - Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- José Guilherme de Almeida
  - European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
  - Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal
- Yixin Wang
  - Department of Bioengineering, Stanford University, Palo Alto, CA, USA
- Lin Han
  - Tandon School of Engineering, New York University, New York, NY, USA
- Xin Yang
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Marco Labagnara
  - Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Vojislav Gligorovski
  - Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Maxime Scheder
  - Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Sahand Jamal Rahi
  - Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Carly Kempster
  - School of Biological Sciences, University of Reading, Reading, UK
- Alice Pollitt
  - School of Biological Sciences, University of Reading, Reading, UK
- Leon Espinosa
  - Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Tâm Mignot
  - Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Jan Moritz Middeke
  - Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
  - Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Jan-Niklas Eckardt
  - Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
  - Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Wangkai Li
  - Department of Automation, University of Science and Technology of China, Hefei, China
- Zhaoyang Li
  - Institute of Advanced Technology, University of Science and Technology of China, Hefei, China
- Xiaochen Cai
  - Department of Computer Science and Technology, Nanjing University, Nanjing, China
- Bizhe Bai
  - School of EECS, The University of Queensland, Brisbane, Queensland, Australia
- David Van Valen
  - Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA
  - Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Erin Weisbart
  - Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Beth A Cimini
  - Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Trevor Cheung
  - Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
- Oscar Brück
  - Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland
  - Department of Oncology, University of Helsinki, Helsinki, Finland
- Gary D Bader
  - Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
  - Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
  - Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada
  - CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada
- Bo Wang
  - Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - UHN AI Hub, University Health Network, Toronto, Ontario, Canada
7. Zargari A, Topacio BR, Mashhadi N, Shariati SA. Enhanced cell segmentation with limited training datasets using cycle generative adversarial networks. iScience 2024;27:109740. PMID: 38706861. PMCID: PMC11068845. DOI: 10.1016/j.isci.2024.109740.
Abstract
Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images with morphological details and nuances closely mimicking real images. This increases the variability seen by the segmentation model, enhancing the authenticity of synthetic samples and thereby improving predictive accuracy and generalization. Experimental results show that cGAN-Seg significantly improves the performance of widely used segmentation models over conventional training techniques. Our approach has the potential to accelerate the development of foundation models for microscopy image analysis, indicating its significance in advancing bioimage analysis with efficient training methodologies.
Affiliation(s)
- Abolfazl Zargari
  - Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Benjamin R. Topacio
  - Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
  - Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
  - Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Najmeh Mashhadi
  - Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- S. Ali Shariati
  - Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
  - Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
  - Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
8. Han S, Phasouk K, Zhu J, Fong Y. Optimizing deep learning-based segmentation of densely packed cells using cell surface markers. BMC Med Inform Decis Mak 2024;24:124. PMID: 38750526. PMCID: PMC11094866. DOI: 10.1186/s12911-024-02502-6.
Abstract
BACKGROUND: Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissues, e.g. highly inflamed tissue caused by viral infection or immune reaction, remains a challenge.
METHODS: We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us using two public image sets, on a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using as many as 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning parameters of the most successful method from the previous step.
RESULTS: The best model before fine-tuning achieves a mean Average Precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose: after training, it achieves an mAP of 0.694; with further parameter tuning, the mAP reaches 0.711.
CONCLUSION: Selecting the best model among the existing approaches and further training it with images of interest produce the most gain in prediction performance. The performance of the resulting model compares favorably to human performance; the remaining imperfection can be attributed to the moderate signal-to-noise ratio in the image set.
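For readers unfamiliar with the mAP figures quoted in this abstract, the sketch below shows the mask-matching logic that typically underlies average precision in cell-segmentation evaluations: a predicted mask counts as a true positive when it overlaps an unmatched ground-truth mask with IoU above a threshold, and AP is computed as TP/(TP+FP+FN). This is an illustrative sketch, not the authors' evaluation code; masks are represented as toy sets of pixel coordinates, and the greedy matcher is a simplification of bipartite matching.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two pixel-coordinate sets."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

def precision_at_iou(predictions, ground_truths, threshold=0.5):
    """Greedy one-to-one matching of predicted to ground-truth masks.

    A prediction is a true positive if it overlaps a still-unmatched
    ground-truth mask with IoU >= threshold. Returns the average
    precision TP / (TP + FP + FN) common in segmentation benchmarks.
    """
    unmatched = list(ground_truths)
    tp = 0
    for pred in predictions:
        best_idx, best_iou = None, threshold
        for i, gt in enumerate(unmatched):
            score = iou(pred, gt)
            if score >= best_iou:
                best_idx, best_iou = i, score
        if best_idx is not None:
            unmatched.pop(best_idx)  # each ground-truth mask matches once
            tp += 1
    fp = len(predictions) - tp
    fn = len(unmatched)
    return tp / (tp + fp + fn) if (tp + fp + fn) else 1.0

# Two ground-truth cells; one prediction overlaps well, one is spurious.
gt = [{(0, 0), (0, 1), (1, 0), (1, 1)}, {(5, 5), (5, 6)}]
pred = [{(0, 0), (0, 1), (1, 0)}, {(9, 9)}]
print(precision_at_iou(pred, gt))  # 1 TP, 1 FP, 1 FN -> 0.3333...
```

In practice such scores are averaged over several IoU thresholds to give the mAP values reported above.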
Affiliation(s)
- Sunwoo Han
  - Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Khamsone Phasouk
  - Department of Laboratory Medicine and Pathology, University of Washington School of Medicine, Seattle, USA
- Jia Zhu
  - Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
  - Department of Laboratory Medicine and Pathology, University of Washington School of Medicine, Seattle, USA
- Youyi Fong
  - Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
9. Tong L, Corrigan A, Kumar NR, Hallbrook K, Orme J, Wang Y, Zhou H. CLANet: A comprehensive framework for cross-batch cell line identification using brightfield images. Med Image Anal 2024;94:103123. PMID: 38430651. DOI: 10.1016/j.media.2024.103123.
Abstract
Cell line authentication plays a crucial role in the biomedical field, ensuring researchers work with accurately identified cells. Supervised deep learning has made remarkable strides in cell line identification by studying cell morphological features through cell imaging. However, biological batch (bio-batch) effects, a significant issue stemming from the different times at which data is generated, lead to substantial shifts in the underlying data distribution, thus complicating reliable differentiation between cell lines from distinct batch cultures. To address this challenge, we introduce CLANet, a pioneering framework for cross-batch cell line identification using brightfield images, specifically designed to tackle three distinct bio-batch effects. We propose a cell cluster-level selection method to efficiently capture cell density variations, and a self-supervised learning strategy to manage image quality variations, thus producing reliable patch representations. Additionally, we adopt multiple instance learning (MIL) for effective aggregation of instance-level features for cell line identification. Our innovative time-series segment sampling module further enhances MIL's feature-learning capabilities, mitigating biases from varying incubation times across batches. We validate CLANet using data from 32 cell lines across 93 experimental bio-batches from the AstraZeneca Global Cell Bank. Our results show that CLANet outperforms related approaches (e.g., domain adaptation, MIL), demonstrating its effectiveness in addressing bio-batch effects in cell line identification.
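The multiple instance learning (MIL) step mentioned in this abstract can be pictured as follows: many patch-level feature vectors from one batch form a "bag", the bag is aggregated into a single representation, and only the bag receives a label. CLANet's actual aggregator is learned; the mean-pool and nearest-prototype classifier below are deliberate simplifications, and all names and feature values are hypothetical.

```python
def mean_pool(instances):
    """Aggregate a bag of instance feature vectors by element-wise mean."""
    n = len(instances)
    dim = len(instances[0])
    return [sum(vec[d] for vec in instances) / n for d in range(dim)]

def classify_bag(instances, prototypes):
    """Assign the bag to the cell line whose prototype vector is nearest
    (squared Euclidean distance) to the pooled bag representation."""
    bag = mean_pool(instances)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(prototypes, key=lambda line: dist(prototypes[line], bag))

# Three patch embeddings from one imaging batch (hypothetical 2-D features)
# and two hypothetical cell-line prototypes.
patches = [[0.9, 0.1], [1.1, 0.0], [1.0, 0.2]]
prototypes = {"HeLa": [1.0, 0.1], "A549": [0.0, 1.0]}
print(classify_bag(patches, prototypes))  # -> HeLa
```

The point of the bag-level decision is robustness: a noisy individual patch cannot flip the prediction, which is why MIL suits batch-affected imaging data.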
Affiliation(s)
- Lei Tong
  - School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
  - Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Adam Corrigan
  - Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Navin Rathna Kumar
  - UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Alderley Park, UK
- Kerry Hallbrook
  - UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Alderley Park, UK
- Jonathan Orme
  - UK Cell Culture and Banking, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Yinhai Wang
  - Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, UK
- Huiyu Zhou
  - School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
10. Yakovlev EV, Simkin IV, Shirokova AA, Kolotieva NA, Novikova SV, Nasyrov AD, Denisenko IR, Gursky KD, Shishkov IN, Narzaeva DE, Salmina AB, Yurchenko SO, Kryuchkov NP. Machine learning approach for recognition and morphological analysis of isolated astrocytes in phase contrast microscopy. Sci Rep 2024;14:9846. PMID: 38684715. PMCID: PMC11059356. DOI: 10.1038/s41598-024-59773-2.
Abstract
Astrocytes are glycolytically active cells in the central nervous system playing a crucial role in various brain processes from homeostasis to neurotransmission. Astrocytes possess a complex branched morphology, frequently examined by fluorescence microscopy. However, staining and fixation may alter the properties of astrocytes, thereby affecting the accuracy of experimental data on astrocyte dynamics and morphology. On the other hand, phase contrast microscopy can be used to study astrocyte morphology without affecting the cells, but the post-processing of the resulting low-contrast images is challenging. The main result of this work is a novel approach for recognition and morphological analysis of unstained astrocytes based on machine-learning recognition of microscopic images. We conducted a series of experiments involving the cultivation of isolated astrocytes from the rat brain cortex followed by microscopy. Using the proposed approach, we tracked the temporal evolution of the average total branch length, branching, and area per astrocyte in our experiments. We believe that the proposed approach and the obtained experimental data will be of interest and benefit to the scientific communities in cell biology, biophysics, and machine learning.
Affiliation(s)
- Egor V Yakovlev, Ivan V Simkin, Anastasiya A Shirokova, Artur D Nasyrov, Ilya R Denisenko, Konstantin D Gursky, Ivan N Shishkov, Stanislav O Yurchenko, Nikita P Kryuchkov: Scientific-Educational Centre "Soft matter and physics of fluids", Bauman Moscow State Technical University, 2nd Baumanskaya Street 5, Moscow, 105005, Russia
- Nataliya A Kolotieva, Svetlana V Novikova, Diana E Narzaeva, Alla B Salmina: Scientific-Educational Centre "Soft matter and physics of fluids", Bauman Moscow State Technical University, Moscow, Russia; Research Center of Neurology, 80 Volokolamskoye Shosse, Moscow, 125367, Russia
11
Ji Q, Yin T, Zhang P, Liu Q, Hou C. Study on Fine-Grained Visual Classification of Low-Resolution Urinary Erythrocyte. J Imaging Inform Med 2024:10.1007/s10278-024-01082-1. [PMID: 38622386 DOI: 10.1007/s10278-024-01082-1] [Received: 10/07/2023] [Revised: 03/06/2024] [Accepted: 03/08/2024] [Indexed: 04/17/2024]
Abstract
Morphological analysis of urinary red blood cells, sometimes called an "extracorporeal renal biopsy," is an important clinical laboratory test. However, the accuracy of existing urinary red blood cell morphology analyzers is suboptimal, and they are not widely used in medical examinations. Challenges include low image spatial resolution, blurred distinguishing features between cell classes, difficulty in fine-grained feature extraction, and insufficient data volume. This article aims to improve the classification accuracy of low-resolution urinary red blood cells. We propose a super-resolution method based on a category-aware loss and an RBC-MIX data augmentation approach. It optimizes the cross-entropy loss to maximize the classification boundary and improve intra-class tightness and inter-class difference, achieving fine-grained classification of low-resolution urinary red blood cells. Experimental results demonstrate that this method achieves an accuracy of 97.8% on low-resolution urinary red blood cell images, with only category labels required. It can serve as a practical reference for urinary red blood cell morphology examination.
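The margin-style optimization of cross-entropy described above can be illustrated with an additive-margin variant: subtracting a margin from the true-class logit makes the target class harder to satisfy, which pushes decision boundaries apart and tightens intra-class clusters. This is a generic sketch of the idea, not the paper's exact category-aware loss:

```python
import numpy as np

def margin_cross_entropy(logits, target, margin=0.5):
    """Cross-entropy with an additive margin on the target logit
    (a generic margin-softmax sketch, not the paper's loss)."""
    z = logits.astype(float).copy()
    z[target] -= margin              # penalize the true class by the margin
    z -= z.max()                     # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

logits = np.array([2.0, 1.0, 0.1])
plain = margin_cross_entropy(logits, target=0, margin=0.0)
with_margin = margin_cross_entropy(logits, target=0, margin=0.5)
```

Because the margin lowers the target logit, the loss is strictly larger than plain cross-entropy whenever the margin is positive, so minimizing it forces a larger separation from competing classes.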
Affiliation(s)
- Qingbo Ji, Tingshuo Yin, Pengfei Zhang, Changbo Hou: College of Information and Communication Engineering, Harbin Engineering University, Harbin, China; Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China
- Qingquan Liu: Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China
12
Chen H, Murphy RF. 3DCellComposer - A Versatile Pipeline Utilizing 2D Cell Segmentation Methods for 3D Cell Segmentation. bioRxiv 2024:2024.03.08.584082. [PMID: 38559093 PMCID: PMC10979887 DOI: 10.1101/2024.03.08.584082] [Indexed: 04/04/2024]
Abstract
Background: Cell segmentation is crucial in bioimage informatics, as its accuracy directly affects conclusions drawn from cellular analyses. While many approaches to 2D cell segmentation have been described, 3D cell segmentation has received much less attention. 3D segmentation faces significant challenges, including limited availability of training data, because the annotation task is difficult for human annotators, and inherent three-dimensional complexity. As a result, existing 3D cell segmentation methods often lack broad applicability across imaging modalities. Results: To address this, we developed a generalizable approach for using 2D cell segmentation methods to produce accurate 3D cell segmentations. We implemented this approach in 3DCellComposer, a versatile, open-source package that allows users to choose any existing 2D segmentation model appropriate for their tissue or cell type(s) without requiring additional training. Importantly, we enhanced our open-source CellSegmentationEvaluator quality evaluation tool to support 3D images. It provides metrics that allow selection of the best approach for a given imaging source and modality, without the need for human annotations to assess performance. Using these metrics, we demonstrated that our approach produced high-quality 3D segmentations of tissue images and could outperform an existing 3D segmentation method on the cell culture images with which that method was trained. Conclusions: 3DCellComposer, when paired with well-trained 2D segmentation models, provides an important alternative to acquiring human-annotated 3D images for new sample types or imaging modalities and then training 3D segmentation models on them. It is expected to be of significant value for large-scale projects such as the Human BioMolecular Atlas Program.
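One common way to turn per-slice 2D segmentations into 3D objects, in the spirit of (though not necessarily identical to) 3DCellComposer's approach, is to segment each slice independently and then link objects across adjacent slices by overlap. A toy greedy-IoU linker, with illustrative names and threshold:

```python
import numpy as np

def link_slices(labels_2d, iou_threshold=0.3):
    """Greedy 3D linking of per-slice 2D labels (toy sketch, not the
    paper's algorithm): a 2D object inherits the 3D id of the
    best-overlapping object in the previous slice when their IoU exceeds
    the threshold; otherwise it starts a new 3D object."""
    out = np.zeros_like(labels_2d)
    next_id = 1
    prev = None
    for z, sl in enumerate(labels_2d):
        for lab in np.unique(sl[sl > 0]):
            mask = sl == lab
            best_id, best_iou = 0, iou_threshold
            if prev is not None:
                for pid in np.unique(prev[prev > 0]):
                    pmask = prev == pid
                    iou = (mask & pmask).sum() / (mask | pmask).sum()
                    if iou > best_iou:
                        best_id, best_iou = pid, iou
            if best_id == 0:
                best_id, next_id = next_id, next_id + 1
            out[z][mask] = best_id
        prev = out[z]
    return out

# Toy stack: one object persisting across slices 0-1, a new one in slice 2.
labels = np.zeros((3, 4, 4), dtype=int)
labels[0, 0:2, 0:2] = 1
labels[1, 0:2, 0:2] = 5   # same region, different per-slice label
labels[2, 2:4, 2:4] = 2   # disjoint region
out = link_slices(labels)
```

The per-slice label values are arbitrary; only cross-slice overlap determines which 2D objects are merged into one 3D cell.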
Affiliation(s)
- Haoran Chen, Robert F Murphy: Computational Biology Department, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
13
Bommanapally V, Abeyrathna D, Chundi P, Subramaniam M. Super resolution-based methodology for self-supervised segmentation of microscopy images. Front Microbiol 2024; 15:1255850. [PMID: 38533330 PMCID: PMC10963421 DOI: 10.3389/fmicb.2024.1255850] [Received: 07/09/2023] [Accepted: 02/15/2024] [Indexed: 03/28/2024]
Abstract
Data-driven Artificial Intelligence (AI)/Machine Learning (ML) image analysis approaches have gained considerable momentum in analyzing microscopy images in bioengineering, biotechnology, and medicine. The success of these approaches crucially relies on the availability of high-quality microscopy images, which is often a challenge due to the diverse experimental conditions and modes under which these images are obtained. In this study, we propose using recent ML-based image super-resolution (SR) techniques to improve the quality of microscopy images, incorporate them into multiple ML-based image analysis tasks, and describe a comprehensive study investigating the impact of SR techniques on the segmentation of microscopy images. The impacts of four Generative Adversarial Network (GAN)- and transformer-based SR techniques on microscopy image quality are measured using three well-established quality metrics. These SR techniques are incorporated into multiple deep network pipelines using supervised, contrastive, and non-contrastive self-supervised methods to semantically segment microscopy images from multiple datasets. Our results show that the quality of microscopy images has a direct influence on ML model performance and that both supervised and self-supervised network pipelines using SR images perform 2%-6% better than baselines that do not use SR. Based on our experiments, we also establish that the image quality improvement threshold range [20-64] for the complemented Perception-based Image Quality Evaluator (PIQE) metric can be used by domain experts as a pre-condition for incorporating SR techniques to significantly improve segmentation performance. A plug-and-play software platform developed to integrate SR techniques with various deep networks using supervised and self-supervised learning methods is also presented.
Affiliation(s)
- Vidya Bommanapally: Department of Computer Science, University of Nebraska, Omaha, NE, United States
14
Israel U, Marks M, Dilip R, Li Q, Yu C, Laubscher E, Li S, Schwartz M, Pradhan E, Ates A, Abt M, Brown C, Pao E, Pearson-Goulart A, Perona P, Gkioxari G, Barnowski R, Yue Y, Valen DV. A Foundation Model for Cell Segmentation. bioRxiv 2024:2023.11.17.567630. [PMID: 38045277 PMCID: PMC10690226 DOI: 10.1101/2023.11.17.567630] [Indexed: 12/05/2023]
Abstract
Cells are a fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, most models in use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach for mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve human-level performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected across various imaging modalities. We show that CellSAM has strong zero-shot performance and can be improved with a few examples via few-shot learning. We also show that CellSAM can unify bioimaging analysis workflows such as spatial transcriptomics and cell tracking. A deployed version of CellSAM is available at https://cellsam.deepcell.org/.
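The detect-then-prompt flow described above, in which an object detector proposes boxes and a promptable segmenter produces a mask for each box, can be sketched with simple stand-ins for both stages. Neither function below is the real CellFinder or the SAM API; both are threshold-based mocks that only illustrate how the two stages compose:

```python
import numpy as np

def detect_box(image, thresh=0.5):
    """Stand-in for a cell detector (CellFinder's role in the paper):
    returns a bounding box (y0, x0, y1, x1) around bright pixels."""
    ys, xs = np.nonzero(image > thresh)
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

def segment_with_box_prompt(image, box, thresh=0.5):
    """Stand-in for a promptable segmenter (SAM's role in the paper):
    produces a mask only inside the prompted box."""
    y0, x0, y1, x1 = box
    mask = np.zeros(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = image[y0:y1, x0:x1] > thresh
    return mask

image = np.zeros((8, 8))
image[2:5, 3:6] = 1.0          # one bright "cell"
box = detect_box(image)        # stage 1: detection produces a box prompt
mask = segment_with_box_prompt(image, box)  # stage 2: prompted mask
```

The key point is the interface: detection output (a box) is exactly the prompt the segmenter consumes, so either stage can be swapped out independently.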
Affiliation(s)
- Uriah Israel: Division of Biology and Biological Engineering and Division of Computing and Mathematical Science, Caltech
- Markus Marks, Pietro Perona: Division of Engineering and Applied Science and Division of Computing and Mathematical Science, Caltech
- Rohit Dilip, Yisong Yue: Division of Computing and Mathematical Science, Caltech
- Qilin Li: Division of Engineering and Applied Science, Caltech
- Changhua Yu, Shenyi Li, Elora Pradhan, Ada Ates, Martin Abt, Caitlin Brown, Edward Pao: Division of Biology and Biological Engineering, Caltech
- David Van Valen: Division of Biology and Biological Engineering, Caltech; Howard Hughes Medical Institute
15
Kato S, Hotta K. Automatic enhancement preprocessing for segmentation of low quality cell images. Sci Rep 2024; 14:3619. [PMID: 38351053 PMCID: PMC10864346 DOI: 10.1038/s41598-024-53411-7] [Received: 05/23/2023] [Accepted: 01/31/2024] [Indexed: 02/16/2024]
Abstract
We present a novel automatic preprocessing and ensemble learning technique for the segmentation of low-quality cell images. Capturing cells under intense light is challenging because of their vulnerability to light-induced cell death, so microscopic cell images tend to be of low quality, which lowers semantic segmentation accuracy. This problem cannot be satisfactorily solved by classical image preprocessing methods. We therefore propose automatic enhancement preprocessing (AEP), which translates an input image into images that are easy for deep networks to recognize. AEP is composed of two deep neural networks: the penultimate feature maps of the first network are employed as filters that translate a low-quality input image into images that are easily classified by deep learning. Additionally, we propose automatic weighted ensemble learning (AWEL), which combines multiple segmentation results. Since the second network predicts a segmentation result for each translated input image, the multiple segmentation results can be aggregated by automatically determining suitable weights. Experiments on two types of cell image segmentation confirmed that AEP can translate low-quality cell images into images that are easy to segment and that segmentation accuracy improves with AWEL.
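The ensemble step can be pictured as a weighted fusion of the per-translation probability maps. In AWEL the weights are determined automatically during training; the sketch below uses fixed softmax-normalized weights purely to show the fusion mechanics, so the function name and weighting scheme are illustrative assumptions:

```python
import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Fuse segmentation probability maps with softmax-normalized weights
    and threshold at 0.5 (illustrative sketch of weighted ensembling;
    AWEL learns its weights rather than taking them as fixed inputs)."""
    w = np.exp(weights - np.max(weights))
    w = w / w.sum()
    fused = np.tensordot(w, np.stack(prob_maps), axes=1)
    return fused > 0.5, fused

a = np.array([[0.9, 0.2], [0.8, 0.1]])   # prediction from translation 1
b = np.array([[0.6, 0.4], [0.2, 0.3]])   # prediction from translation 2
seg, fused = weighted_ensemble([a, b], weights=np.array([1.0, 0.0]))
```

Because the weights pass through a softmax, any real-valued weight vector yields a convex combination of the maps, so the fused output stays a valid probability map.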
Affiliation(s)
- Sota Kato: Department of Electrical, Information, Materials and Materials Engineering, Graduate School of Science and Engineering, Meijo University, Shiogamaguchi, Tempaku-ku, Nagoya, Aichi, 468-8502, Japan
- Kazuhiro Hotta: Department of Electrical and Electronic Engineering, Faculty of Engineering, Meijo University, Nagoya, Aichi, Japan
16
Brun A, Mougeot G, Denis P, Collin ML, Pouchin P, Montaurier C, Walrand S, Capel F, Gueugneau M. A new bio imagery user-friendly tool for automatic morphometry measurement on muscle cell cultures and histological sections. Sci Rep 2024; 14:3108. [PMID: 38326394 PMCID: PMC11269594 DOI: 10.1038/s41598-024-53658-0] [Received: 06/17/2023] [Accepted: 02/03/2024] [Indexed: 02/09/2024]
Abstract
TRUEFAD (TRUE Fiber Atrophy Distinction) is a user-friendly bioimage analysis tool developed to allow consistent and automatic measurement of myotube diameter in vitro and of muscle fiber size and type in rodent and human muscle biopsies. The TRUEFAD package was set up to standardize and accelerate muscle research via easy-to-obtain images run through an open-source FIJI plugin. We show here both the robustness and the performance of our pipelines in correctly segmenting muscle cells and fibers. We evaluated our pipeline on real experimental image sets and showed consistent reliability across images and conditions. TRUEFAD makes possible systematic and rapid screening of substances impacting muscle morphology, helping scientists focus on their hypotheses rather than on image analysis.
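A common automatic proxy for fiber or myotube size is the equivalent diameter derived from a segmented mask's area, i.e. the diameter of the circle with the same area. This is a simplified stand-in for TRUEFAD's measurements, not its actual code; the function name and calibration parameter are illustrative:

```python
import numpy as np

def equivalent_diameter(mask, um_per_px=1.0):
    """Diameter of the circle whose area equals the mask area, in
    micrometers (a generic morphometry proxy, not TRUEFAD's code)."""
    area = mask.sum() * um_per_px ** 2   # pixel count -> physical area
    return 2.0 * np.sqrt(area / np.pi)

# A filled 10x10 px "fiber" imaged at 0.5 um/px: area = 25 um^2.
mask = np.ones((10, 10), dtype=bool)
d = equivalent_diameter(mask, um_per_px=0.5)
```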
Affiliation(s)
- Aurélien Brun, Philippe Denis, Marie Laure Collin, Christophe Montaurier, Stéphane Walrand, Frédéric Capel, Marine Gueugneau: UMR1019 Unité de Nutrition Humaine (UNH), INRAE, Université Clermont Auvergne, Clermont-Ferrand, France
- Guillaume Mougeot, Pierre Pouchin: iGReD, CNRS, INSERM, Université Clermont Auvergne, Clermont-Ferrand, France
17
Hirling D, Tasnadi E, Caicedo J, Caroprese MV, Sjögren R, Aubreville M, Koos K, Horvath P. Segmentation metric misinterpretations in bioimage analysis. Nat Methods 2024; 21:213-216. [PMID: 37500758 PMCID: PMC10864175 DOI: 10.1038/s41592-023-01942-8] [Received: 09/26/2022] [Accepted: 06/06/2023] [Indexed: 07/29/2023]
Abstract
Quantitative evaluation of image segmentation algorithms is crucial in the field of bioimage analysis. The most common assessment scores, however, are often misinterpreted and multiple definitions coexist with the same name. Here we present the ambiguities of evaluation metrics for segmentation algorithms and show how these misinterpretations can alter leaderboards of influential competitions. We also propose guidelines for how the currently existing problems could be tackled.
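One classic ambiguity of this kind is aggregation order: averaging per-image IoU scores versus pooling all pixels over the dataset yields different numbers under the same metric name, which is enough to reorder a leaderboard. A minimal demonstration (a constructed example, not taken from the paper's benchmarks):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Image 1: perfect prediction. Image 2: IoU = 0.5.
pred1, gt1 = np.ones(100, bool), np.ones(100, bool)
pred2 = np.zeros(100, bool); pred2[:10] = True
gt2 = np.zeros(100, bool); gt2[:20] = True

# Same metric, two aggregation conventions:
macro = (iou(pred1, gt1) + iou(pred2, gt2)) / 2       # per-image mean
pooled = iou(np.concatenate([pred1, pred2]),
             np.concatenate([gt1, gt2]))              # dataset-pooled
```

Here the per-image mean is 0.75 while the pooled score is about 0.92, because pooling lets the large, easy image dominate; reporting either as "IoU" without qualification invites exactly the misinterpretation the paper describes.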
Affiliation(s)
- Dominik Hirling, Ervin Tasnadi: Biological Research Centre, Eötvös Loránd Research Network (ELKH), Szeged, Hungary; Doctoral School of Computer Science, University of Szeged, Szeged, Hungary
- Juan Caicedo: Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Rickard Sjögren: Sartorius Corporate Research, Umeå, Sweden; CellVoyant Technologies Ltd, Bristol, UK
- Krisztian Koos: Biological Research Centre, Eötvös Loránd Research Network (ELKH), Szeged, Hungary
- Peter Horvath: Biological Research Centre, Eötvös Loránd Research Network (ELKH), Szeged, Hungary; Single-Cell Technologies Ltd, Szeged, Hungary; Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland
18
Wan Z, Li M, Wang Z, Tan H, Li W, Yu L, Samuel DJ. CellT-Net: A Composite Transformer Method for 2-D Cell Instance Segmentation. IEEE J Biomed Health Inform 2024; 28:730-741. [PMID: 37023158 DOI: 10.1109/jbhi.2023.3265006] [Indexed: 04/08/2023]
Abstract
Cell instance segmentation (CIS) via light microscopy and artificial intelligence (AI) is essential to cell and gene therapy-based health care management, which offers the hope of revolutionary health care. An effective CIS method can help clinicians diagnose neurological disorders and quantify how well these disorders respond to treatment. To address the CIS task, which is challenged by dataset characteristics such as irregular morphology, variation in size, cell adhesion, and obscure contours, we propose a novel deep learning model named CellT-Net. In particular, the Swin Transformer (Swin-T) is used as the basic model to construct the CellT-Net backbone, as its self-attention mechanism can adaptively focus on useful image regions while suppressing irrelevant background information. Moreover, CellT-Net constructs a hierarchical representation and generates multi-scale feature maps suitable for detecting and segmenting cells at different scales. A novel composite style named cross-level composition (CLC) is proposed to build composite connections between identical Swin-T models in the CellT-Net backbone and generate more representative features. The earth mover's distance (EMD) loss and binary cross-entropy loss are used to train CellT-Net and achieve precise segmentation of overlapping cells. The LiveCELL and Sartorius datasets are used to validate the model, and the results demonstrate that CellT-Net handles the challenges arising from the characteristics of cell datasets better than state-of-the-art models.
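For intuition on the EMD term: between two normalized 1D histograms on shared bins, the earth mover's distance reduces to the L1 distance between their cumulative sums. The sketch below shows only this special case of the underlying distance, not the training loss as used in CellT-Net:

```python
import numpy as np

def emd_1d(p, q):
    """Earth mover's distance between two normalized 1D histograms on the
    same bins: the L1 distance between their cumulative distributions.
    (Illustrative special case, not the paper's training loss.)"""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

p = np.array([1.0, 0.0, 0.0])   # all mass in bin 0
q = np.array([0.0, 0.0, 1.0])   # all mass in bin 2
emd = emd_1d(p, q)
```

Moving one unit of mass across two bins costs 2, which is exactly what the cumulative-sum formula returns; unlike a pointwise loss, the cost grows with how far predictions are displaced, not just whether they differ.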
19
Gogoberidze N, Cimini BA. Defining the boundaries: challenges and advances in identifying cells in microscopy images. Curr Opin Biotechnol 2024; 85:103055. [PMID: 38142646 PMCID: PMC11170924 DOI: 10.1016/j.copbio.2023.103055] [Received: 10/07/2023] [Revised: 11/28/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely varying test data as well as efficiency and usability. Increased attention on documentation, sharing, and evaluation standards is leading to increased user-friendliness and acceleration toward the goal of a truly universal method.
Affiliation(s)
- Beth A Cimini: Imaging Platform, Broad Institute, Cambridge, MA 02142, USA
20
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhao D, Zhou D, Tong G, Liu D, Ji Z. A systematic evaluation of computation methods for cell segmentation. bioRxiv 2024:2024.01.28.577670. [PMID: 38352578 PMCID: PMC10862744 DOI: 10.1101/2024.01.28.577670] [Indexed: 02/22/2024]
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including training data and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, which substantially reduces the time and effort for training cell segmentation models.
Affiliation(s)
- Yuxing Wang: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA; Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
- Junhan Zhao: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Hongye Xu, Cheng Han, Zhiqiang Tao, Dongfang Liu: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Dongfang Zhao: Department of Computer Science & eScience Institute, University of Washington, Seattle, WA, USA
- Dawei Zhou: Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Gang Tong: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Zhicheng Ji: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
21
Xun D, Wang R, Zhang X, Wang Y. Microsnoop: A generalist tool for microscopy image representation. Innovation (N Y) 2024; 5:100541. [PMID: 38235187 PMCID: PMC10794109 DOI: 10.1016/j.xinn.2023.100541] [Received: 05/31/2023] [Accepted: 11/17/2023] [Indexed: 01/19/2024]
Abstract
Accurate profiling of microscopy images, from small scale to high throughput, is an essential procedure in basic and applied biological research. Here, we present Microsnoop, a novel deep learning-based representation tool trained on large-scale microscopy images using masked self-supervised learning. Microsnoop can process various complex and heterogeneous images, which we classify into three categories: single-cell, full-field, and batch-experiment images. Our benchmark study on 10 high-quality evaluation datasets, containing over 2,230,000 images, demonstrated Microsnoop's robust and state-of-the-art microscopy image representation ability, surpassing existing generalist and even several custom algorithms. Microsnoop can be integrated with other pipelines to perform tasks such as super-resolution histopathology image analysis and multimodal analysis. Furthermore, Microsnoop can be adapted to various hardware and can be easily deployed on local or cloud computing platforms. We will regularly retrain and reevaluate the model using community-contributed data to consistently improve Microsnoop.
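Masked self-supervised pretraining corrupts each image by hiding a large fraction of patches and trains the model to reconstruct the hidden content. The sketch below shows the masking step alone, with illustrative parameter names and a fixed seed; it is not Microsnoop's implementation:

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.75, seed=0):
    """Masked-image-modeling input corruption (illustrative sketch):
    split the image into non-overlapping patches and zero out a random
    subset; a model would be trained to reconstruct the hidden patches."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    gy, gx = h // patch, w // patch
    n_mask = int(gy * gx * mask_ratio)
    idx = rng.choice(gy * gx, size=n_mask, replace=False)
    masked = image.copy()
    hidden = np.zeros((gy, gx), dtype=bool)
    for i in idx:
        y, x = divmod(i, gx)
        masked[y * patch:(y + 1) * patch, x * patch:(x + 1) * patch] = 0.0
        hidden[y, x] = True
    return masked, hidden

img = np.ones((16, 16))          # 4x4 grid of 4x4 patches
masked, hidden = mask_patches(img)
```

With a 0.75 mask ratio only a quarter of the patches survive, which is what makes the reconstruction objective non-trivial and forces the encoder to learn context rather than copy pixels.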
Affiliation(s)
- Dejin Xun: Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
- Rui Wang: State Key Lab of Computer-Aided Design & Computer Graphics, Zhejiang University, Hangzhou 310058, China
- Xingcai Zhang: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Yi Wang: Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China; Innovation Institute for Artificial Intelligence in Medicine of Zhejiang University, Hangzhou 310018, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing 314100, China
22
Sonneck J, Zhou Y, Chen J. MMV_Im2Im: an open-source microscopy machine vision toolbox for image-to-image transformation. Gigascience 2024; 13:giad120. [PMID: 38280188 PMCID: PMC10821710 DOI: 10.1093/gigascience/giad120] [Received: 04/06/2023] [Revised: 09/30/2023] [Accepted: 12/28/2023] [Indexed: 01/29/2024]
Abstract
Over the past decade, deep learning (DL) research in computer vision has grown rapidly, with many advances in DL-based image analysis methods for biomedical problems. In this work, we introduce MMV_Im2Im, a new open-source Python package for image-to-image transformation in bioimaging applications. MMV_Im2Im is designed around a generic image-to-image transformation framework that can be used for a wide range of tasks, including semantic segmentation, instance segmentation, image restoration, and image generation. Our implementation takes advantage of state-of-the-art machine learning engineering techniques, allowing researchers to focus on their research without worrying about engineering details. We demonstrate the effectiveness of MMV_Im2Im on more than 10 different biomedical problems, showcasing its general potential and applicability. For computational biomedical researchers, MMV_Im2Im provides a starting point for developing new biomedical image analysis or machine learning algorithms: they can either reuse the code in this package or fork and extend it to facilitate the development of new methods. Experimental biomedical researchers can benefit from this work by gaining a comprehensive view of the image-to-image transformation concept through diversified examples and use cases. We hope this work inspires the community to explore how DL-based image-to-image transformation can be integrated into the assay development process, enabling new biomedical studies that cannot be done with traditional experimental assays alone. To help researchers get started, we provide source code, documentation, and tutorials for MMV_Im2Im at https://github.com/MMV-Lab/mmv_im2im under the MIT license.
Affiliation(s)
- Justin Sonneck: Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany; Faculty of Computer Science, Ruhr-University Bochum, Universitätsstraße 150, Bochum 44801, Germany
- Yu Zhou, Jianxu Chen: Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
23
Hou W, Ji Z. GPT-4V exhibits human-like performance in biomedical image classification. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.12.31.573796. [PMID: 38260646 PMCID: PMC10802384 DOI: 10.1101/2023.12.31.573796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2024]
Abstract
We demonstrate that GPT-4V(ision), a large multimodal model, exhibits strong one-shot learning ability, generalizability, and natural language interpretability in various biomedical image classification tasks, including classifying cell types, tissues, cell states, and disease status. Such features resemble human-like performance and distinguish GPT-4V from conventional image classification methods, which typically require large cohorts of training data and lack interpretability.
Collapse
Affiliation(s)
- Wenpin Hou
- Department of Biostatistics, The Mailman School of Public Health, Columbia University, New York City, NY, USA
| | - Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
| |
Collapse
|
24
|
Ouderkirk S, Sedley A, Ong M, Shifflet MR, Harkrider QC, Wright NT, Miller CJ. A Perspective on Developing Modeling and Image Analysis Tools to Investigate Mechanosensing Proteins. Integr Comp Biol 2023; 63:1532-1542. [PMID: 37558388 PMCID: PMC10755202 DOI: 10.1093/icb/icad107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 07/17/2023] [Accepted: 07/17/2023] [Indexed: 08/11/2023] Open
Abstract
The shift of funding organizations to prioritize interdisciplinary work points to the need for workflow models that better accommodate interdisciplinary studies. Most scientists are trained in a specific field and are often unaware of the kind of insights that other disciplines could contribute to solving various problems. In this paper, we present a perspective on how we developed an experimental pipeline between a microscopy and image analysis/bioengineering lab. Specifically, we connected microscopy observations about a putative mechanosensing protein, obscurin, to image analysis techniques that quantify cell changes. While the individual methods used are well established (fluorescence microscopy; ImageJ WEKA and mTrack2 programs; MATLAB), there are no existing best practices for how to integrate these techniques into a cohesive, interdisciplinary narrative. Here, we describe a broadly applicable workflow of how microscopists can more easily quantify cell properties (e.g., perimeter, velocity) from microscopy videos of eukaryotic (MDCK) adherent cells. Additionally, we give examples of how these foundational measurements can create more complex, customizable cell mechanics tools and models.
Collapse
Affiliation(s)
- Stephanie Ouderkirk
- Department of Chemistry, James Madison University, Harrisonburg, VA 22807, USA
| | - Alex Sedley
- Department of Engineering, James Madison University, Harrisonburg, VA 22807, USA
| | - Mason Ong
- Department of Engineering, James Madison University, Harrisonburg, VA 22807, USA
| | - Mary Ruth Shifflet
- Department of Chemistry, Bridgewater College, Bridgewater, VA 22812, USA
| | - Quinn C Harkrider
- Department of Chemistry, James Madison University, Harrisonburg, VA 22807, USA
| | - Nathan T Wright
- Department of Chemistry, James Madison University, Harrisonburg, VA 22807, USA
| | - Callie J Miller
- Department of Engineering, James Madison University, Harrisonburg, VA 22807, USA
| |
Collapse
|
25
|
Meierrieks F, Kour A, Pätz M, Pflanz K, Wolff MW, Pickl A. Unveiling the secrets of adeno-associated virus: novel high-throughput approaches for the quantification of multiple serotypes. Mol Ther Methods Clin Dev 2023; 31:101118. [PMID: 37822717 PMCID: PMC10562196 DOI: 10.1016/j.omtm.2023.101118] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Accepted: 09/14/2023] [Indexed: 10/13/2023]
Abstract
Adeno-associated virus (AAV) vectors are among the most prominent viral vectors for in vivo gene therapy, and their investigation and development using high-throughput techniques have gained increasing interest. However, sample throughput remains a bottleneck in most analytical assays. In this study, we compared commonly used analytical methods for AAV genome titer, capsid titer, and transducing titer determination with advanced methods using AAV2, AAV5, and AAV8 as representative examples. For the determination of genomic titers, we evaluated the suitability of qPCR and four different digital PCR methods and assessed the respective advantages and limitations of each method. We found that both ELISA and bio-layer interferometry provide comparable capsid titers, with bio-layer interferometry reducing the workload and having a 2.8-fold higher linear measurement range. Determination of the transducing titer demonstrated that live-cell analysis required less manual effort compared with flow cytometry. Both techniques had a similar linear range of detection, and no statistically significant differences in transducing titers were observed. This study demonstrated that the use of advanced analytical methods provides faster and more robust results while simultaneously increasing sample throughput and reducing active bench work time.
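The genome-titer comparison above rests on standard-curve quantification. As a hedged illustration (not the authors' pipeline), a qPCR standard curve Cq = slope × log10(copies) + intercept can be inverted to estimate AAV genome copies from a measured Cq; all values below are hypothetical.

```python
# Illustrative qPCR standard-curve math (assumed form, not the study's code):
# invert Cq = slope * log10(copies) + intercept to recover genome copies.

def amplification_efficiency(slope: float) -> float:
    """PCR efficiency implied by the standard-curve slope (1.0 = 100%)."""
    return 10 ** (-1.0 / slope) - 1.0

def copies_from_cq(cq: float, slope: float, intercept: float,
                   dilution_factor: float = 1.0) -> float:
    """Estimate genome copies per reaction from a Cq via the standard curve."""
    log10_copies = (cq - intercept) / slope
    return (10 ** log10_copies) * dilution_factor

# Example with a typical curve: slope -3.32 corresponds to ~100% efficiency
eff = amplification_efficiency(-3.32)
titer = copies_from_cq(cq=22.5, slope=-3.32, intercept=38.0)
```

A slope near -3.32 cycles per decade is the usual benchmark for a well-behaved assay; digital PCR sidesteps this curve entirely by counting positive partitions.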
Collapse
Affiliation(s)
- Frederik Meierrieks
- Lab Essentials Applications Development, Sartorius Lab Instruments GmbH & Co. KG, Otto-Brenner-Straße 20, 37079 Göttingen, Germany
| | - Ahmad Kour
- Lab Essentials Applications Development, Sartorius Lab Instruments GmbH & Co. KG, Otto-Brenner-Straße 20, 37079 Göttingen, Germany
| | - Marvin Pätz
- Lab Essentials Applications Development, Sartorius Stedim Biotech GmbH, August-Spindler-Straße 11, 37079 Göttingen, Germany
| | - Karl Pflanz
- Lab Essentials Applications Development, Sartorius Stedim Biotech GmbH, August-Spindler-Straße 11, 37079 Göttingen, Germany
| | - Michael W. Wolff
- Institute of Bioprocess Engineering and Pharmaceutical Technology, University of Applied Sciences Mittelhessen (THM), 35390 Giessen, Germany
- Fraunhofer Institute for Molecular Biology and Applied Ecology (IME), 35392 Giessen, Germany
| | - Andreas Pickl
- Lab Essentials Applications Development, Sartorius Lab Instruments GmbH & Co. KG, Otto-Brenner-Straße 20, 37079 Göttingen, Germany
| |
Collapse
|
26
|
Yu J. Point-Supervised Single-Cell Segmentation via Collaborative Knowledge Sharing. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3884-3894. [PMID: 37676808 DOI: 10.1109/tmi.2023.3312988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/09/2023]
Abstract
Despite their superior performance, deep-learning methods often suffer from the disadvantage of needing large-scale, well-annotated training data. In response, recent literature has seen a proliferation of efforts aimed at reducing the annotation burden. This paper focuses on a weakly supervised training setting for single-cell segmentation models in which the only available training label is the rough location of individual cells. This specific problem is of practical interest because of the widely available nuclei counter-stain data in the biomedical literature, from which cell locations can be derived programmatically. Of more general interest is a proposed self-learning method called collaborative knowledge sharing, which is related to but distinct from the better-known consistency-learning methods. The strategy achieves self-learning by sharing knowledge between a principal model and a very lightweight collaborator model. Importantly, the two models are entirely different in their architectures, capacities, and model outputs: in our case, the principal model approaches the segmentation problem from an object-detection perspective, whereas the collaborator model approaches it from a semantic segmentation perspective. We assessed the effectiveness of this strategy in experiments on LIVECell, a large single-cell segmentation dataset of bright-field images, and on the A431 dataset, a fluorescence image dataset in which the location labels are generated automatically from nuclei counter-stain data. Implementation code is available at https://github.com/jiyuuchc/lacss.
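The knowledge-sharing idea above can be caricatured as two models supervising each other on unlabeled pixels. A minimal numpy sketch, assuming a symmetric pixel-wise cross-entropy as the sharing loss (the paper's actual formulation differs and uses distinct architectures):

```python
import numpy as np

# Toy sketch of cross-model knowledge sharing: each model is trained
# against the other's foreground-probability map via a consistency
# (cross-entropy) loss. Shapes and the loss choice are illustrative
# assumptions, not the paper's formulation.

def cross_entropy(p_teacher: np.ndarray, p_student: np.ndarray) -> float:
    """Mean pixel-wise cross-entropy between two probability maps."""
    eps = 1e-7
    p_student = np.clip(p_student, eps, 1 - eps)
    return float(-np.mean(p_teacher * np.log(p_student)
                          + (1 - p_teacher) * np.log(1 - p_student)))

rng = np.random.default_rng(0)
principal = rng.random((16, 16))     # detection-style model's output
collaborator = rng.random((16, 16))  # semantic-style model's output

# symmetric sharing: each model's loss uses the other's prediction
loss_principal = cross_entropy(collaborator, principal)
loss_collaborator = cross_entropy(principal, collaborator)
```

In practice each loss term would be combined with the sparse point-location supervision and minimized by the respective model's optimizer.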
Collapse
|
27
|
Kataria T, Rajamani S, Ayubi AB, Bronner M, Jedrzkiewicz J, Knudsen BS, Elhabian SY. Automating Ground Truth Annotations for Gland Segmentation Through Immunohistochemistry. Mod Pathol 2023; 36:100331. [PMID: 37716506 DOI: 10.1016/j.modpat.2023.100331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 08/14/2023] [Accepted: 09/08/2023] [Indexed: 09/18/2023]
Abstract
Microscopic evaluation of glands in the colon is of utmost importance in the diagnosis of inflammatory bowel disease and cancer. When properly trained, deep learning pipelines can provide a systematic, reproducible, and quantitative assessment of disease-related changes in glandular tissue architecture. The training and testing of deep learning models require large amounts of manual annotations, which are difficult, time-consuming, and expensive to obtain. Here, we propose a method for automated generation of ground truth in digital hematoxylin and eosin (H&E)-stained slides using immunohistochemistry (IHC) labels. The image processing pipeline generates annotations of glands in H&E histopathology images from colon biopsy specimens by transfer of gland masks from KRT8/18, CDX2, or EPCAM IHC. The IHC gland outlines are transferred to coregistered H&E images for training of deep learning models. We compared the performance of the deep learning models to that of manual annotations using an internal held-out set of biopsy specimens as well as 2 public data sets. Our results show that EPCAM IHC provides gland outlines that closely match manual gland annotations (Dice = 0.89) and are resilient to damage by inflammation. In addition, we propose a simple data sampling technique that allows models trained on data from several sources to be adapted to a new data source using just a few newly annotated samples. The best-performing models achieved average Dice scores of 0.902 and 0.89 on the Gland Segmentation and Colorectal Adenocarcinoma Gland colon cancer public data sets, respectively, when trained with only 10% of annotated cases from either public cohort. Altogether, the performance of our models indicates that automated annotations using cell type-specific IHC markers can safely replace manual annotations.
Automated IHC labels from single-institution cohorts can be combined with small numbers of hand-annotated cases from multi-institutional cohorts to train models that generalize well to diverse data sources.
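The Dice scores quoted above measure overlap between predicted and reference gland masks. A minimal sketch of the metric (illustrative names, not the authors' code):

```python
import numpy as np

# Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; defined as 1.0 when
# both masks are empty, following common convention.

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((4, 4), dtype=int); a[:2, :2] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[:2, :] = 1    # 8 pixels, 4 overlapping
# dice = 2*4 / (4 + 8) = 0.666...
```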
Collapse
Affiliation(s)
- Tushar Kataria
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
| | - Saradha Rajamani
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
| | - Abdul Bari Ayubi
- Department of Pathology, University of Utah, Salt Lake City, Utah
| | - Mary Bronner
- Department of Pathology, University of Utah, Salt Lake City, Utah; Department of Pathology, ARUP Laboratories, Salt Lake City, Utah
| | - Jolanta Jedrzkiewicz
- Department of Pathology, University of Utah, Salt Lake City, Utah; Department of Pathology, ARUP Laboratories, Salt Lake City, Utah
| | - Beatrice S Knudsen
- Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah; Department of Pathology, University of Utah, Salt Lake City, Utah.
| | - Shireen Y Elhabian
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah.
| |
Collapse
|
28
|
Alieva M, Wezenaar AKL, Wehrens EJ, Rios AC. Bridging live-cell imaging and next-generation cancer treatment. Nat Rev Cancer 2023; 23:731-745. [PMID: 37704740 DOI: 10.1038/s41568-023-00610-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 07/25/2023] [Indexed: 09/15/2023]
Abstract
By providing spatial, molecular and morphological data over time, live-cell imaging can provide a deeper understanding of the cellular and signalling events that determine cancer response to treatment. Understanding this dynamic response has the potential to enhance clinical outcome by identifying biomarkers or actionable targets to improve therapeutic efficacy. Here, we review recent applications of live-cell imaging for uncovering both tumour heterogeneity in treatment response and the mode of action of cancer-targeting drugs. Given the increasing uses of T cell therapies, we discuss the unique opportunity of time-lapse imaging for capturing the interactivity and motility of immunotherapies. Although traditionally limited in the number of molecular features captured, novel developments in multidimensional imaging and multi-omics data integration offer strategies to connect single-cell dynamics to molecular phenotypes. We review the effect of these recent technological advances on our understanding of the cellular dynamics of tumour targeting and discuss their implication for next-generation precision medicine.
Collapse
Affiliation(s)
- Maria Alieva
- Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Instituto de Investigaciones Biomedicas Sols-Morreale (IIBM), CSIC-UAM, Madrid, Spain
| | - Amber K L Wezenaar
- Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Oncode Institute, Utrecht, The Netherlands
| | - Ellen J Wehrens
- Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands.
- Oncode Institute, Utrecht, The Netherlands.
| | - Anne C Rios
- Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands.
- Oncode Institute, Utrecht, The Netherlands.
| |
Collapse
|
29
|
Goyal V, Schaub NJ, Voss TC, Hotaling NA. Unbiased image segmentation assessment toolkit for quantitative differentiation of state-of-the-art algorithms and pipelines. BMC Bioinformatics 2023; 24:388. [PMID: 37828466 PMCID: PMC10568754 DOI: 10.1186/s12859-023-05486-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 09/18/2023] [Indexed: 10/14/2023] Open
Abstract
BACKGROUND Image segmentation pipelines are commonly used in microscopy to identify cellular compartments such as the nucleus and cytoplasm, but there are few standards for comparing segmentation accuracy across pipelines. Selecting a segmentation assessment pipeline can seem daunting to researchers given the number and variety of metrics available for evaluating segmentation quality. RESULTS Here we present automated pipelines that compute a comprehensive set of 69 metrics for evaluating segmented data, and we propose a model-selection methodology based on quantitative analysis, dimension reduction or unsupervised classification techniques, and informed selection criteria. CONCLUSION We show that these metrics can often be reduced to a small subset that still gives a complete picture of segmentation accuracy, with different groups of metrics providing sensitivity to different types of segmentation error. The tools are delivered as easy-to-use Python libraries, command-line tools, Common Workflow Language tools, and Web Image Processing Pipeline interactive plugins so that a wide range of users can access and use them. We also show how our evaluation methods can be used to track changes in segmentations across modern machine learning/deep learning workflows and use cases.
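The dimension-reduction step described above boils a large metric panel down to a few informative axes. A hedged numpy sketch of the idea, standardizing a (pipelines × metrics) score matrix and projecting onto its leading principal components; this is purely illustrative, and the toolkit's actual selection criteria are richer:

```python
import numpy as np

def reduce_metrics(scores: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project rows of a (pipelines x metrics) matrix onto the top PCs."""
    x = scores - scores.mean(axis=0)
    std = scores.std(axis=0)
    std[std == 0] = 1.0          # avoid dividing constant metrics by zero
    x = x / std
    # SVD of the standardized matrix yields the principal axes (rows of vt)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

rng = np.random.default_rng(0)
scores = rng.random((6, 69))        # 6 pipelines scored on 69 metrics
embedded = reduce_metrics(scores)   # shape (6, 2)
```

Pipelines that land close together in the embedded space make similar kinds of segmentation errors, which is what motivates keeping only one metric per redundant group.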
Collapse
Affiliation(s)
- Vishakha Goyal
- Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
- Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
| | - Nick J Schaub
- Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
- Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
| | - Ty C Voss
- Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
- Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
| | - Nathan A Hotaling
- Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA.
- Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA.
| |
Collapse
|
30
|
Antonelli L, Polverino F, Albu A, Hada A, Asteriti IA, Degrassi F, Guarguaglini G, Maddalena L, Guarracino MR. ALFI: Cell cycle phenotype annotations of label-free time-lapse imaging data from cultured human cells. Sci Data 2023; 10:677. [PMID: 37794110 PMCID: PMC10551030 DOI: 10.1038/s41597-023-02540-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 09/05/2023] [Indexed: 10/06/2023] Open
Abstract
Detecting and tracking multiple moving objects in a video is a challenging task. For living cells, the task becomes even more arduous: cells change their morphology over time, can partially overlap, and mitosis produces new cells. Unlike fluorescence microscopy, label-free techniques can be applied easily to almost all cell lines, reducing sample-preparation complexity and phototoxicity. In this study, we present ALFI, a publicly available dataset of images and annotations for label-free microscopy that notably extends the current panorama of expertly labeled data for the detection and tracking of cultured living nontransformed and cancer human cells. It consists of 29 time-lapse image sequences from HeLa, U2OS, and hTERT RPE-1 cells under different experimental conditions, acquired by differential interference contrast microscopy, for a total of 237.9 hours, and contains various annotations (pixel-wise segmentation masks, object-wise bounding boxes, and tracking information). The dataset is useful for testing and comparing methods that identify interphase and mitotic events and reconstruct their lineage, and for discriminating different cellular phenotypes.
Collapse
Affiliation(s)
- Laura Antonelli
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
| | - Federica Polverino
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Alexandra Albu
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
| | - Aroj Hada
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
| | - Italia A Asteriti
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Francesca Degrassi
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Giulia Guarguaglini
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy.
| | - Lucia Maddalena
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy.
| | - Mario R Guarracino
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Laboratory of Algorithms and Technologies for Networks Analysis, National Research University Higher School of Economics, Moscow, Russia
| |
Collapse
|
31
|
Jiang J, Zeng Z, Xu J, Wang W, Shi B, Zhu L, Chen Y, Yao W, Wang Y, Zhang H. Long-term, real-time and label-free live cell image processing and analysis based on a combined algorithm of CellPose and watershed segmentation. Heliyon 2023; 9:e20181. [PMID: 37767498 PMCID: PMC10520323 DOI: 10.1016/j.heliyon.2023.e20181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2023] [Revised: 09/06/2023] [Accepted: 09/13/2023] [Indexed: 09/29/2023] Open
Abstract
Developing a rapid and quantitative method to accurately evaluate the physiological abilities of living cells is critical for tumor control. Many biological experiments have attempted to measure the proliferation and movement abilities of cells, but existing methods cannot provide real-time, objective data for label-free cells. Quantitative imaging techniques that include automatic segmentation of individual label-free cells have been a breakthrough in this regard. In this study, we develop a combined automatic image-processing algorithm based on CellPose and watershed segmentation for long-term, real-time imaging of label-free cells. The method identifies cells reliably regardless of cell density, allowing us to obtain accurate information about the number and proliferative ability of target cells. Our results also suggest that this method is a reliable way to collect real-time data on drug cytotoxicity, cell morphology, and cell motility.
Collapse
Affiliation(s)
- Jiang Jiang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| | - Zhikun Zeng
- School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, China
| | - Jiazhao Xu
- School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, China
| | - Wenfang Wang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| | - Bowen Shi
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| | - Lan Zhu
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| | - Yong Chen
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| | - Weiwu Yao
- Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, No. 1111, Xianxia Road, Shanghai, 200036, China
| | - Yujie Wang
- School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, China
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, No. 1, Dongsanlu, Erxianqiao, Chengdu, 610059, China
- Department of Physics, College of Mathematics and Physics, Chengdu University of Technology, No. 1, Dongsanlu, Erxianqiao, Chengdu, 610059, China
| | - Huan Zhang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197, Ruijin Er Road, Shanghai, 200025, China
| |
Collapse
|
32
|
Han S, Phasouk K, Zhu J, Fong Y. Optimizing Deep Learning-Based Segmentation of Densely Packed Cells using Cell Surface Markers. RESEARCH SQUARE 2023:rs.3.rs-3307496. [PMID: 37841876 PMCID: PMC10571619 DOI: 10.21203/rs.3.rs-3307496/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2023]
Abstract
Background Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissue, e.g., tissue highly inflamed by viral infection or immune reaction, remains a challenge. Methods We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us on two public image sets, on a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using up to 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning the parameters of the most successful method from the previous step. Results The best model before fine-tuning achieves a mean Average Precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose; after training it achieves an mAP of 0.694, and with further parameter tuning the mAP reaches 0.711. Conclusion Selecting the best model among the existing approaches and further training it on images of interest produce the largest gains in prediction performance. The performance of the resulting model compares favorably to human performance. The remaining imperfection of the final model can be attributed to the moderate signal-to-noise ratio in the image set.
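The mAP values above come from matching predicted instance masks to ground-truth masks at IoU thresholds. A hedged numpy sketch of that matching at a single threshold, using the TP/(TP+FP+FN) precision definition common in the cell-segmentation literature (illustrative code, not the study's evaluation script):

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def precision_at_iou(preds, truths, thresh=0.5):
    """Greedy one-to-one matching of predictions to ground-truth masks."""
    unmatched = list(range(len(truths)))
    tp = 0
    for p in preds:
        candidates = [(mask_iou(p, truths[t]), t) for t in unmatched]
        if candidates:
            best_iou, best_t = max(candidates)
            if best_iou >= thresh:
                tp += 1
                unmatched.remove(best_t)
    fp = len(preds) - tp
    fn = len(unmatched)
    return tp / (tp + fp + fn)

m1 = np.zeros((5, 5), dtype=int); m1[:2, :2] = 1   # two disjoint cells
m2 = np.zeros((5, 5), dtype=int); m2[3:, 3:] = 1
```

Averaging this quantity over a sweep of IoU thresholds gives the mAP figures reported in the abstract.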
Collapse
Affiliation(s)
- Sunwoo Han
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
| | - Khamsone Phasouk
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
| | - Jia Zhu
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
| | - Youyi Fong
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
| |
Collapse
|
33
|
Sockell A, Wong W, Longwell S, Vu T, Karlsson K, Mokhtari D, Schaepe J, Lo YH, Cornelius V, Kuo C, Van Valen D, Curtis C, Fordyce PM. A microwell platform for high-throughput longitudinal phenotyping and selective retrieval of organoids. Cell Syst 2023; 14:764-776.e6. [PMID: 37734323 DOI: 10.1016/j.cels.2023.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 02/24/2023] [Accepted: 08/22/2023] [Indexed: 09/23/2023]
Abstract
Organoids are powerful experimental models for studying the ontogeny and progression of various diseases including cancer. Organoids are conventionally cultured in bulk using an extracellular matrix mimic. However, bulk-cultured organoids physically overlap, making it impossible to track the growth of individual organoids over time in high throughput. Moreover, local spatial variations in bulk matrix properties make it difficult to assess whether observed phenotypic heterogeneity between organoids results from intrinsic cell differences or differences in the microenvironment. Here, we developed a microwell-based method that enables high-throughput quantification of image-based parameters for organoids grown from single cells, which can further be retrieved from their microwells for molecular profiling. Coupled with a deep learning image-processing pipeline, we characterized phenotypic traits including growth rates, cellular movement, and apical-basal polarity in two CRISPR-engineered human gastric organoid models, identifying genomic changes associated with increased growth rate and changes in accessibility and expression correlated with apical-basal polarity. A record of this paper's transparent peer review process is included in the supplemental information.
Collapse
Affiliation(s)
- Alexandra Sockell
- Department of Genetics, Stanford University, Stanford, CA 94305, USA
| | - Wing Wong
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Medicine, Stanford University, Stanford, CA 94305, USA
| | - Scott Longwell
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Thy Vu
- Department of Biochemistry, UT Austin, Austin, TX 78712, USA
| | - Kasper Karlsson
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Medicine, Stanford University, Stanford, CA 94305, USA
| | - Daniel Mokhtari
- Department of Biochemistry, Stanford University, Stanford, CA 94305, USA
| | - Julia Schaepe
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Yuan-Hung Lo
- Department of Medicine, Stanford University, Stanford, CA 94305, USA
| | - Vincent Cornelius
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Calvin Kuo
- Department of Medicine, Stanford University, Stanford, CA 94305, USA
| | - David Van Valen
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA 91125, USA
| | - Christina Curtis
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Medicine, Stanford University, Stanford, CA 94305, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA; Stanford Cancer Institute, Stanford University, Stanford, CA 94305, USA; Chan Zuckerberg Biohub, San Francisco, CA 94110, USA.
| | - Polly M Fordyce
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Chan Zuckerberg Biohub, San Francisco, CA 94110, USA; ChEM-H Institute, Stanford University, Stanford, CA 94305, USA.
| |
Collapse
|
34
|
Cai Z, Chen L, Liu HL. EPC-DARTS: Efficient partial channel connection for differentiable architecture search. Neural Netw 2023; 166:344-353. [PMID: 37544091 DOI: 10.1016/j.neunet.2023.07.029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 07/17/2023] [Accepted: 07/18/2023] [Indexed: 08/08/2023]
Abstract
With weight-sharing and continuous relaxation strategies, differentiable architecture search (DARTS) offers a fast and effective solution for neural network architecture search across various deep learning tasks. However, unresolved issues such as inefficient memory utilization and the poor stability of searched architectures caused by randomly selected channels, which can even lead to performance collapse, still perplex researchers and practitioners. In this paper, we propose EPC-DARTS, a novel efficient channel attention mechanism based on partial channel connection for differentiable neural architecture search, to address these two issues. Specifically, we design an efficient channel attention module that captures cross-channel interactions and assigns weights based on channel importance, dramatically improving search efficiency and reducing memory occupation. Moreover, only the partial channels with higher weights enter the mixed operation computation, so the unstable architectures produced by random channel selection are avoided. Experimental results show that EPC-DARTS achieves remarkably competitive performance (test accuracy of 97.60% on CIFAR-10 and 84.02% on CIFAR-100) compared with other state-of-the-art NAS methods, using only 0.2 GPU-days.
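The attention-weighted channel selection described above can be caricatured in a few lines: squeeze each channel to a scalar by global average pooling, mix neighboring channel descriptors with a small 1D convolution, gate with a sigmoid, and keep only the top-weighted channels for the partial connection. The sketch below is a toy numpy illustration with hypothetical shapes and kernel values, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention_weights(feat: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """feat: (C, H, W) feature map -> (C,) attention weights in (0, 1)."""
    squeezed = feat.mean(axis=(1, 2))                   # global average pool
    mixed = np.convolve(squeezed, kernel, mode="same")  # local cross-channel mix
    return sigmoid(mixed)

def select_partial_channels(feat, kernel, k):
    """Keep the k channels with the highest attention weights, rescaled."""
    w = channel_attention_weights(feat, kernel)
    top = np.sort(np.argsort(w)[-k:])
    return feat[top] * w[top, None, None]

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))      # 8 channels of a 4x4 feature map
kernel = np.array([0.25, 0.5, 0.25])   # 3-tap channel-mixing kernel
partial = select_partial_channels(feat, kernel, k=2)   # shape (2, 4, 4)
```

Replacing random channel sampling with this importance-driven selection is what, per the abstract, stabilizes the searched architectures.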
Affiliation(s)
- Zicheng Cai, School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, China
- Lei Chen, School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, China
- Hai-Lin Liu, School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, China

35
Corallo D, Dalla Vecchia M, Lazic D, Taschner-Mandl S, Biffi A, Aveic S. The molecular basis of tumor metastasis and current approaches to decode targeted migration-promoting events in pediatric neuroblastoma. Biochem Pharmacol 2023;215:115696. [PMID: 37481138] [DOI: 10.1016/j.bcp.2023.115696]
Abstract
Cell motility is a crucial biological process that plays a critical role in the development of multicellular organisms and is essential for tissue formation and regeneration. Uncontrolled cell motility, however, can lead to various diseases, including neoplasms. In this review, we discuss recent advances in the discovery of regulatory mechanisms underlying the metastatic spread of neuroblastoma, a solid pediatric tumor that originates in the embryonic migratory cells of the neural crest. The highly motile phenotype of metastatic neuroblastoma cells depends on intracellular and extracellular processes whose targeting could benefit the treatment of high-risk patients with neuroblastoma, for whom current therapies remain inadequate. The development of new, potentially migration-inhibiting compounds and of standardized preclinical approaches for selecting anti-metastatic drugs in neuroblastoma is also discussed.
Affiliation(s)
- Diana Corallo, Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy
- Marco Dalla Vecchia, Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy
- Daria Lazic, St. Anna Children's Cancer Research Institute, CCRI, Zimmermannplatz 10, 1090, Vienna, Austria
- Sabine Taschner-Mandl, St. Anna Children's Cancer Research Institute, CCRI, Zimmermannplatz 10, 1090, Vienna, Austria
- Alessandra Biffi, Pediatric Hematology, Oncology and Stem Cell Transplant Division, Woman's and Child Health Department, University of Padova, 35121 Padova, Italy
- Sanja Aveic, Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy

36
Piansaddhayanon C, Koracharkornradt C, Laosaengpha N, Tao Q, Ingrungruanglert P, Israsena N, Chuangsuwanich E, Sriswasdi S. Label-free tumor cells classification using deep learning and high-content imaging. Sci Data 2023;10:570. [PMID: 37634014] [PMCID: PMC10460430] [DOI: 10.1038/s41597-023-02482-8]
Abstract
Many studies have shown that cellular morphology can be used to distinguish spiked-in tumor cells against a blood-sample background. However, most validation experiments included only homogeneous cell lines and inadequately captured the broad morphological heterogeneity of cancer cells. Furthermore, normal non-blood cells could be erroneously classified as cancer because their morphology differs from blood cells. Here, we constructed a dataset of microscopic images of organoid-derived cancer and normal cells with diverse morphology and developed a proof-of-concept deep learning model that can distinguish cancer cells from normal cells within an unlabeled microscopy image. In total, more than 75,000 organoid-derived cells from 3 cholangiocarcinoma patients were collected. The model achieved an area under the receiver operating characteristic curve (AUROC) of 0.78 and can generalize to cell images from an unseen patient. These resources serve as a foundation for an automated, robust platform for circulating tumor cell detection.
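The AUROC figure reported above has a simple probabilistic reading: it is the chance that a randomly chosen positive example is scored above a randomly chosen negative one (ties counting half). A dependency-free sketch of that definition follows; the `auroc` helper is illustrative, not the authors' evaluation code.

```python
def auroc(scores, labels):
    """Empirical AUROC: probability that a randomly chosen positive
    is scored above a randomly chosen negative; ties count 0.5.
    labels are 0 (negative) or 1 (positive)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The pairwise loop is O(n²) and kept for clarity; rank-based formulations compute the same quantity in O(n log n).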
Affiliation(s)
- Chawan Piansaddhayanon, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, 10330, Thailand
- Chonnuttida Koracharkornradt, Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Napat Laosaengpha, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Qingyi Tao, NVIDIA AI Technology Center, Singapore
- Praewphan Ingrungruanglert, Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Nipan Israsena, Center of Excellence for Stem Cell and Cell Therapy, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Department of Pharmacology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Ekapol Chuangsuwanich, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Sira Sriswasdi, Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Center for Artificial Intelligence in Medicine, Research Affairs, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand

37
Körber N. MIA is an open-source standalone deep learning application for microscopic image analysis. Cell Rep Methods 2023;3:100517. [PMID: 37533647] [PMCID: PMC10391334] [DOI: 10.1016/j.crmeth.2023.100517]
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep-learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats, which are compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets.
Affiliation(s)
- Nils Körber, German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany

38
Feshki M, Martel S, De Koninck Y, Gosselin B. Improving flat fluorescence microscopy in scattering tissue through deep learning strategies. Opt Express 2023;31:23008-23026. [PMID: 37475396] [DOI: 10.1364/oe.489677]
Abstract
Intravital microscopy in small animals increasingly contributes to the visualization of short- and long-term mammalian biological processes. Miniaturized fluorescence microscopy has revolutionized the observation of live animals' neural circuits. The technology's ability to miniaturize further, to improve freely-moving experimental settings, is limited by its standard lens-based layout: typical miniature microscope designs contain a stack of heavy, bulky optical components aligned at relatively long distances. Computational lensless microscopy can overcome this limitation by replacing the lenses with a simple thin mask. Among other critical applications, the Flat Fluorescence Microscope (FFM) holds promise for real-time imaging of brain circuits in freely moving animals, but recent reports show that image quality in tissue must improve relative to, for instance, imaging in clear media. Although promising results have been reported with mask-based fluorescence microscopes in clear tissues, light scattering in biological tissue remains a major challenge. The outstanding performance of deep learning (DL) networks in studies of computational flat cameras and imaging through scattering media motivates the development of DL models for FFMs. Our holistic ray-tracing and Monte Carlo FFM computational model assisted us in evaluating deep-scattering-medium imaging with DL techniques. We demonstrate that physics-based DL models combined with the classical reconstruction technique of the alternating direction method of multipliers (ADMM) perform fast and robust image reconstruction, particularly in scattering media. The structural similarity indexes of images reconstructed from scattering-media recordings increased by up to 20% compared with the prevalent iterative models. We also introduce and discuss the challenges of DL approaches for FFMs under physics-informed supervised and unsupervised learning.
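The structural similarity index (SSIM) used above to quantify reconstruction quality can be illustrated with a single-window implementation of the standard SSIM formula. This is a simplification - the usual SSIM index averages this quantity over local windows - and `global_ssim` is a hypothetical helper, not the authors' code.

```python
def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two equal-length pixel lists, using the
    standard constants C1=(0.01*L)^2 and C2=(0.03*L)^2 for dynamic range L.
    The windowed mean of this quantity is the usual SSIM index."""
    n = len(x)
    assert n == len(y) and n > 1
    mx = sum(x) / n                                   # mean of x
    my = sum(y) / n                                   # mean of y
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)      # variance of x
    vy = sum((b - my) ** 2 for b in y) / (n - 1)      # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; any luminance, contrast, or structure mismatch pulls the score below 1.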
39
Affiliation(s)
- Jun Ma, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Bo Wang, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; AI Hub, University Health Network, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada

40
Schilling MP, Klinger L, Schumacher U, Schmelzer S, Lopez MB, Nestler B, Reischl M. AI2Seg: A Method and Tool for AI-based Annotation Inspection of Biomedical Instance Segmentation Datasets. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-6. [PMID: 38083322] [DOI: 10.1109/embc40787.2023.10341074]
Abstract
In biomedical engineering, deep neural networks are commonly used for the diagnosis and assessment of diseases through the interpretation of medical images. The effectiveness of these networks relies heavily on the availability of annotated datasets for training. However, obtaining noise-free and consistent annotations from experts such as pathologists, radiologists, and biologists remains a significant challenge. One common task in clinical practice and biological imaging applications is instance segmentation, yet there is currently a lack of methods and open-source tools for the automated inspection of biomedical instance segmentation datasets for noisy annotations. To address this issue, we propose a novel deep learning-based approach for inspecting noisy annotations and provide an accompanying software implementation, AI2Seg, to facilitate its use by domain experts. The performance of the proposed algorithm is demonstrated on the medical MoNuSeg dataset and the biological LIVECell dataset.
41
Feshki M, De Koninck Y, Gosselin B. Deep Learning Empowered Fresnel-based Lensless Fluorescence Microscopy. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. [PMID: 38082985] [DOI: 10.1109/embc40787.2023.10339990]
Abstract
Miniaturized fluorescence microscopy has revolutionized the way neuroscientists study the brain in vivo, and recent developments in computational lensless imaging promise a next generation of miniaturized microscopes for lensless fluorescence microscopy. We developed a microscope prototype using an optimized Fresnel amplitude mask. While many lensless imaging modalities have reported excellent performance using deep learning (DL) approaches, the application of DL to lensless fluorescence imaging has remained largely unexplored. We generated a computational dataset based on experimental system calibration to evaluate DL capabilities on biological cell morphologies. We show that our DL-assisted microscope can provide high-quality imaging with a structural similarity index of 89%, and that the least absolute error was decreased by 63% with the DL-assisted method compared with classical models. The state-of-the-art performance of this prototype reinforces the potential of amplitude masks in lensless microscopy applications, which are critical for robust in-vivo flat microscopy with engineered image sensors. Clinical relevance: this study aids in advancing miniaturized fluorescence microscopy, which greatly impacts long-term studies of brain circuits and disease in freely moving animal models.
42
Schilling MP, El Khaled El Faraj R, Urrutia Gómez JE, Sonnentag SJ, Wang F, Nestler B, Orian-Rousseau V, Popova AA, Levkin PA, Reischl M. Automated high-throughput image processing as part of the screening platform for personalized oncology. Sci Rep 2023;13:5107. [PMID: 36991084] [PMCID: PMC10060403] [DOI: 10.1038/s41598-023-32144-z]
Abstract
Cancer is a devastating disease and the second leading cause of death worldwide, and the development of resistance to current therapies is making cancer treatment more difficult. Combining the multi-omics data of individual tumors with an in-vitro Drug Sensitivity and Resistance Test (DSRT) can help determine the appropriate therapy for each patient. Miniaturized high-throughput technologies, such as the droplet microarray, enable personalized oncology, and we are developing a platform that incorporates DSRT profiling workflows from minute amounts of cellular material and reagents. Experimental results often rely on image-based readouts, where images are acquired in grid-like structures with heterogeneous image-processing targets. Manual image analysis is time-consuming, not reproducible, and infeasible for high-throughput experiments because of the amount of data generated; automated image-processing solutions are therefore an essential component of a screening platform for personalized oncology. We present a comprehensive concept that covers assisted image annotation, algorithms for image processing of grid-like high-throughput experiments, and enhanced learning processes, as well as the deployment of processing pipelines. Details of the computation and implementation are presented; in particular, we outline solutions for linking automated image processing for personalized oncology with high-performance computing. Finally, we demonstrate the advantages of our proposal using image data from heterogeneous practical experiments and discuss the remaining challenges.
Affiliation(s)
- Marcel P Schilling, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Razan El Khaled El Faraj, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Joaquín Eduardo Urrutia Gómez, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Steffen J Sonnentag, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Fei Wang, Institute for Applied Materials, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany
- Britta Nestler, Institute for Applied Materials, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany
- Véronique Orian-Rousseau, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Anna A Popova, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Pavel A Levkin, Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Markus Reischl, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany

43
Trizna EY, Sinitca AM, Lyanova AI, Baidamshina DR, Zelenikhin PV, Kaplun DI, Kayumov AR, Bogachev MI. Brightfield vs Fluorescent Staining Dataset - A Test Bed Image Set for Machine Learning based Virtual Staining. Sci Data 2023;10:160. [PMID: 36949058] [PMCID: PMC10033900] [DOI: 10.1038/s41597-023-02065-7]
Abstract
Differential fluorescent staining is an effective tool widely adopted for the visualization, segmentation, and quantification of cells and cellular substructures as part of standard microscopic imaging protocols. The incompatibility of staining agents with viable cells represents a major and often unavoidable limitation to its applicability in live experiments, requiring the extraction of samples at different stages of an experiment and increasing laboratory costs. Accordingly, computerized image analysis methodology capable of segmenting and quantifying cells and cellular substructures from plain monochromatic light-microscopy images, without the aid of any physical markup technique, is of considerable interest. The enclosed set contains microscopic images of human colon adenocarcinoma Caco-2 cells obtained under various imaging conditions with different fractions of viable versus non-viable cells. Each field of view is provided in a three-fold representation: a phase-contrast microscopy image and two differential fluorescent microscopy images with specific markup of viable and non-viable cells, respectively, produced using two different staining schemes. This makes the set a prominent test bed for the validation of image analysis methods.
Affiliation(s)
- Elena Y Trizna, Institute for Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
- Aleksandr M Sinitca, Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Asya I Lyanova, Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Diana R Baidamshina, Institute for Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
- Pavel V Zelenikhin, Institute for Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
- Dmitrii I Kaplun, Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Airat R Kayumov, Institute for Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
- Mikhail I Bogachev, Institute for Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia; Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia

44
Mencattini A, D'Orazio M, Casti P, Comes MC, Di Giuseppe D, Antonelli G, Filippi J, Corsi F, Ghibelli L, Veith I, Di Natale C, Parrini MC, Martinelli E. Deep-Manager: a versatile tool for optimal feature selection in live-cell imaging analysis. Commun Biol 2023;6:241. [PMID: 36869080] [PMCID: PMC9984362] [DOI: 10.1038/s42003-023-04585-9]
Abstract
One of the major and often underestimated problems in bioimaging is whether features extracted for a discrimination or regression task will remain valid for a broader set of similar experiments, or in the presence of unpredictable perturbations during image acquisition. The issue is even more important for deep learning features, because of the lack of an a priori known relationship between the black-box descriptors (deep features) and the phenotypic properties of the biological entities under study. In this regard, the widespread use of descriptors such as those coming from pre-trained Convolutional Neural Networks (CNNs) is hindered by the fact that they are devoid of apparent physical meaning and strongly subject to unspecific biases, i.e., features that depend not on the cell phenotypes but rather on acquisition artifacts such as brightness or texture changes, focus shifts, autofluorescence, or photobleaching. The proposed Deep-Manager software platform offers the possibility to efficiently select those features that have low sensitivity to unspecific disturbances and, at the same time, high discriminating power. Deep-Manager can be used with both handcrafted and deep features. The performance of the method is demonstrated in five different case studies, ranging from the selection of handcrafted green-fluorescence-protein intensity features in chemotherapy-related breast cancer cell death investigation to problems in the context of deep transfer learning. Deep-Manager, freely available at https://github.com/BEEuniroma2/Deep-Manager , is suitable for use in many fields of bioimaging and is conceived to be constantly upgraded with novel image acquisition perturbations and modalities.
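The core selection criterion described above - low sensitivity to unspecific disturbances combined with high discriminating power - can be caricatured as a simple ratio ranking. This is an assumption made purely for illustration; Deep-Manager's actual scoring is more elaborate, and the function name is hypothetical.

```python
def rank_features(discrim_power, perturb_sensitivity, eps=1e-9):
    """Rank feature indices by a discrimination-to-sensitivity ratio:
    prefer features that separate classes well but change little under
    acquisition perturbations (brightness shifts, defocus, bleaching)."""
    scores = [d / (s + eps)
              for d, s in zip(discrim_power, perturb_sensitivity)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

Given one discrimination score and one perturbation-sensitivity score per feature, the highest-ranked indices are the robust, informative features one would keep.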
Affiliation(s)
- A Mencattini, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- M D'Orazio, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- P Casti, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- M C Comes, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- D Di Giuseppe, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- G Antonelli, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- J Filippi, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy
- F Corsi, Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy; Department of Biology, University of Rome Tor Vergata, 00133, Rome, Italy
- L Ghibelli, Department of Biology, University of Rome Tor Vergata, 00133, Rome, Italy
- I Veith, Inserm U830, Stress and Cancer Lab, Institut Curie, Centre de Recherche, Paris Sciences et Lettres Research University, 75005, Paris, France
- C Di Natale, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy
- M C Parrini, Inserm U830, Stress and Cancer Lab, Institut Curie, Centre de Recherche, Paris Sciences et Lettres Research University, 75005, Paris, France
- E Martinelli, Department of Electronic Engineering, University of Rome Tor Vergata, 00133, Rome, Italy; Interdisciplinary Center for Advanced Studies on Lab-on-Chip and Organ-on-Chip Applications (IC-LOC), University of Rome Tor Vergata, 00133, Rome, Italy

45
Shrestha P, Kuang N, Yu J. Efficient end-to-end learning for cell segmentation with machine generated weak annotations. Commun Biol 2023;6:232. [PMID: 36864076] [PMCID: PMC9981753] [DOI: 10.1038/s42003-023-04608-5]
Abstract
Automated cell segmentation from optical microscopy images is usually the first step in a single-cell analysis pipeline. Recently, deep-learning-based algorithms have shown superior performance on cell segmentation tasks, but a disadvantage of deep learning is the requirement for a large amount of fully annotated training data, which is costly to generate. Weakly supervised and self-supervised learning are active research areas, but model accuracy is often inversely correlated with the amount of annotation information provided. Here we focus on a specific subtype of weak annotations that can be generated programmatically from experimental data, allowing for more annotation information content without sacrificing annotation speed. We designed a new model architecture for end-to-end training using such incomplete annotations, and benchmarked our method on a variety of publicly available datasets covering both fluorescence and bright-field imaging modalities. We additionally tested our method on a microscopy dataset generated by us, using machine-generated annotations. The results demonstrate that our models trained under weak supervision can achieve segmentation accuracy competitive with, and in some cases surpassing, state-of-the-art models trained under full supervision. Our method can therefore be a practical alternative to the established fully supervised methods.
Affiliation(s)
- Prem Shrestha, UConn Health, 263 Farmington Ave, Farmington, CT, USA
- Ji Yu, UConn Health, 263 Farmington Ave, Farmington, CT, USA

46
Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. Lab Chip 2023;23:1011-1033. [PMID: 36601812] [DOI: 10.1039/d2lc00813k]
Abstract
Propelled by the striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip has been dramatically transformed from a silo inspection tool into a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal windows (from microseconds to hours). At the same time, the staggering diversity of DL algorithms has revolutionized image processing and analysis at a scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of the latest trends in the context of lab-on-a-chip imaging, or optofluidic imaging. More importantly, we discuss the strengths and caveats of adopting, reinventing, and integrating these imaging techniques and DL algorithms to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergies: image formation, image analytics, and intelligent image-guided autonomous lab-on-a-chip. Despite the ongoing challenges, we anticipate that these will represent the next frontiers in lab-on-a-chip imaging, spearheading new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Kelvin C M Lee, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Bob M F Chung, Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Justin S J Wong, Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Guoan Zheng, Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
- Kevin K Tsia, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong; Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong

47
Sims Z, Strgar L, Thirumalaisamy D, Heussner R, Thibault G, Chang YH. SEG: Segmentation Evaluation in absence of Ground truth labels. bioRxiv 2023:2023.02.23.529809. [PMID: 36865198] [PMCID: PMC9980141] [DOI: 10.1101/2023.02.23.529809]
Abstract
Identifying individual cells or nuclei is often the first step in the analysis of multiplex tissue imaging (MTI) data. Recent efforts to produce plug-and-play, end-to-end MTI analysis tools such as MCMICRO - though groundbreaking in their usability and extensibility - are often unable to provide users with guidance on the most appropriate models for their segmentation task amid an endless proliferation of novel segmentation methods. Unfortunately, evaluating segmentation results on a user's dataset without ground-truth labels is either purely subjective or eventually amounts to performing the original, time-intensive annotation. As a consequence, researchers rely on models pre-trained on other large datasets for their unique tasks. Here, we propose a methodological approach for evaluating MTI nuclei segmentation methods in the absence of ground-truth labels by scoring them relative to a larger ensemble of segmentations. To avoid potential sensitivity to collective bias in the ensemble approach, we refine the ensemble via a weighted average across segmentation methods, with weights derived from a systematic model ablation study. First, we demonstrate a proof of concept and the feasibility of the proposed approach by evaluating segmentation performance on a small dataset with ground-truth annotation. To validate the ensemble and demonstrate the importance of our method-specific weighting, we compare the ensemble's detection and pixel-level predictions - derived without supervision - with the data's ground-truth labels. Second, we apply the methodology to a larger, unlabeled tissue microarray (TMA) dataset that includes a diverse set of breast cancer phenotypes, and provide decision guidelines to help general users choose the most suitable segmentation methods for their own datasets by systematically evaluating the performance of individual segmentation approaches across the entire dataset.
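The core idea of the abstract above - scoring each segmentation method against a weighted ensemble consensus rather than against ground truth - can be sketched as follows. The function names, the majority-vote consensus, and the pixel-level F1 scoring are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def weighted_consensus(masks, weights):
    """Combine binary segmentation masks into a consensus mask
    via a weighted majority vote across methods."""
    masks = np.asarray(masks, dtype=float)       # (n_methods, H, W)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise method weights
    vote = np.tensordot(weights, masks, axes=1)  # weighted foreground fraction per pixel
    return vote >= 0.5

def pixel_f1(pred, ref):
    """Pixel-level F1 (Dice) score of a prediction against a reference mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2 * tp / denom if denom else 1.0

def score_methods(masks, weights):
    """Score each method's mask against the ensemble consensus,
    without any ground-truth labels."""
    consensus = weighted_consensus(masks, weights)
    return [pixel_f1(m, consensus) for m in masks]
```

A method that agrees with the (weighted) majority scores near 1, while an outlier method scores low; the method-specific weights would, per the abstract, come from an ablation study rather than being uniform as in this toy setup.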
Collapse
Affiliation(s)
- Zachary Sims
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| | - Luke Strgar
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| | - Dharani Thirumalaisamy
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| | - Robert Heussner
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| | - Guillaume Thibault
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| | - Young Hwan Chang
- Department of Biomedical Engineering and Computational Biology Program, Oregon Health and Science University (OHSU), OR
| |
Collapse
|
48
|
Vanrobaeys Y, Peterson ZJ, Walsh EN, Chatterjee S, Lin LC, Lyons LC, Nickl-Jockschat T, Abel T. Spatial transcriptomics reveals unique gene expression changes in different brain regions after sleep deprivation. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.18.524406. [PMID: 36712009 PMCID: PMC9882298 DOI: 10.1101/2023.01.18.524406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Sleep deprivation has far-reaching consequences on the brain and behavior, impacting memory, attention, and metabolism. Previous research has focused on gene expression changes in individual brain regions, such as the hippocampus or cortex. Therefore, it is unclear how uniformly or heterogeneously sleep loss affects the brain. Here, we use spatial transcriptomics to define the impact of a brief period of sleep deprivation across the brain. We find that sleep deprivation induced pronounced differences in gene expression across the brain, with the greatest changes in the hippocampus, neocortex, hypothalamus, and thalamus. Both the differentially expressed genes and the direction of regulation differed markedly across regions. Importantly, we developed bioinformatic tools to register tissue sections and gene expression data into a common anatomical space, allowing a brain-wide comparison of gene expression patterns between samples. Our results suggest that distinct molecular mechanisms acting in discrete brain regions underlie the biological effects of sleep deprivation.
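The registration step described above - bringing tissue sections and gene expression data into a common anatomical space for brain-wide comparison - can be sketched as a landmark-based affine fit on spot coordinates. The helper names and landmark setup below are illustrative assumptions, not the authors' bioinformatic pipeline.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmark
    coordinates onto dst (common-space) coordinates."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (3, 2) affine matrix
    return A

def apply_affine(A, pts):
    """Map arbitrary section coordinates (e.g. transcriptomic spots)
    into the common anatomical space."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A
```

Once every section's spots are mapped into the shared coordinate frame, expression values at corresponding anatomical locations can be compared across samples.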
Collapse
Affiliation(s)
- Yann Vanrobaeys
- Interdisciplinary Graduate Program in Genetics, University of Iowa, 357 Medical Research Center Iowa City, IA 52242, USA
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
| | - Zeru J. Peterson
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Psychiatry, University of Iowa, Iowa City, IA, USA
| | - Emily N. Walsh
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
- Interdisciplinary Graduate Program in Neuroscience, University of Iowa, 356 Medical Research Center, Iowa City, IA 52242, USA
| | - Snehajyoti Chatterjee
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
| | - Li-Chun Lin
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neurology, University of Iowa, Iowa City, IA, USA
| | - Lisa C. Lyons
- Program in Neuroscience, Department of Biological Science, Florida State University, Tallahassee, FL, USA
| | - Thomas Nickl-Jockschat
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Psychiatry, University of Iowa, Iowa City, IA, USA
| | - Ted Abel
- Iowa Neuroscience Institute, Carver College of Medicine, 169 Newton Road, 2312 Pappajohn Biomedical Discovery Building, University of Iowa, Iowa City, IA 52242, USA
- Department of Neuroscience and Pharmacology, Carver College of Medicine, 51 Newton Road, 2-417B Bowen Science Building, University of Iowa, Iowa City, IA 52242, USA
| |
Collapse
|
49
|
A novel feature for monitoring the enzymatic harvesting process of adherent cell cultures based on lens-free imaging. Sci Rep 2022; 12:22202. [PMID: 36564377 PMCID: PMC9789138 DOI: 10.1038/s41598-022-22561-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 10/17/2022] [Indexed: 12/24/2022] Open
Abstract
Adherent cell cultures are often dissociated from their culture vessel (and each other) through enzymatic harvesting, where the detachment response is monitored by an operator. However, this approach lacks standardisation and reproducibility, and prolonged exposure or too high a concentration can affect the cells' viability and differentiation potential. Quantitative monitoring systems are required to characterise the cell detachment response and objectively determine the optimal time-point to inhibit the enzymatic reaction. State-of-the-art methodologies rely on bulky imaging systems and/or features (e.g. circularity) that lack robustness. In this study, lens-free imaging (LFI) technology was used to develop a novel cell detachment feature. Cultures from seven different donors were grown and subsequently harvested with a (diluted) enzymatic harvesting solution after 3, 5 and 7 days of culture. Cell detachment was captured with the LFI set-up over a period of 20 min (every 20 s), and by optimising the reconstruction of the LFI intensity images, a new feature could be identified. Bright regions in the intensity image were identified as detaching cells, and using image analysis, a method was developed to automatically extract this feature, defined as the percentage of detached cell regions. Next, the method was quantitatively and qualitatively validated on a diverse set of images. Average absolute error values of 1.49%, 1.34% and 1.97% were obtained for medium-density, high-density and overconfluent cultures, respectively. The detachment response was quantified for all conditions, and the optimal time for enzyme inhibition was reached when approximately 92.5% of the cells were detached. On average, inhibition times of 9.6-11.1 and 16.2-17.2 min were obtained for medium- to high-density and overconfluent cultures, respectively. In general, overconfluent cultures detached much more slowly, and their detachment rate was further decreased by the diluted harvesting solution. Moreover, several donors exhibited similar trends in cell detachment behaviour, with two clear outliers. Using the novel feature, measurements can be performed with increased robustness, while the compact LFI design could pave the way for in situ monitoring in a variety of culture vessels, including bioreactors.
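The feature described above - the percentage of detached cell regions, tracked over time until the ~92.5% inhibition threshold - can be sketched as follows. The mask-based fraction and the helper names are illustrative assumptions, not the published extraction method, which operates on reconstructed LFI intensity images.

```python
import numpy as np

def detached_fraction(detached_mask, cell_mask):
    """Percentage of the cell-covered area classified as detached
    (bright) regions in a single frame."""
    cell_area = cell_mask.sum()
    if cell_area == 0:
        return 0.0
    return 100.0 * np.logical_and(detached_mask, cell_mask).sum() / cell_area

def inhibition_time(fractions, times, threshold=92.5):
    """Return the first time-point (e.g. seconds) at which the
    detached-cell percentage reaches the threshold, or None."""
    for t, f in zip(times, fractions):
        if f >= threshold:
            return t
    return None
```

Evaluated on each 20 s frame of a 20 min acquisition, this yields a detachment curve per culture, from which the enzyme-inhibition time-point is read off at the threshold crossing.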
Collapse
|
50
|
A cellular segmentation algorithm with fast customization. Nat Methods 2022; 19:1536-1537. [PMID: 36344836 DOI: 10.1038/s41592-022-01664-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|