1. Mahbod A, Dorffner G, Ellinger I, Woitek R, Hatamikia S. Improving generalization capability of deep learning-based nuclei instance segmentation by non-deterministic train time and deterministic test time stain normalization. Comput Struct Biotechnol J 2024; 23:669-678. PMID: 38292472; PMCID: PMC10825317; DOI: 10.1016/j.csbj.2023.12.042.
Abstract
With the advent of digital pathology and microscopic systems that can scan and save whole slide histological images automatically, there is a growing trend to use computerized methods to analyze acquired images. Among different histopathological image analysis tasks, nuclei instance segmentation plays a fundamental role in a wide range of clinical and research applications. While many semi- and fully-automatic computerized methods have been proposed for nuclei instance segmentation, deep learning (DL)-based approaches have been shown to deliver the best performance. However, the performance of such approaches usually degrades when they are tested on unseen datasets. In this work, we propose a novel method to improve the generalization capability of a DL-based automatic segmentation approach. Besides utilizing one of the state-of-the-art DL-based models as a baseline, our method incorporates non-deterministic train time and deterministic test time stain normalization, as well as ensembling, to boost the segmentation performance. We trained the model with a single training set and evaluated its segmentation performance on seven test datasets. Our results show that the proposed method provides up to 4.9%, 5.4%, and 5.9% better average performance in segmenting nuclei based on the Dice score, aggregated Jaccard index, and panoptic quality score, respectively, compared to the baseline segmentation model.
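The evaluation metrics this abstract reports (Dice and panoptic quality) can be illustrated with a minimal sketch on toy instance masks represented as sets of pixel coordinates. The function names, data layout, and the simple greedy matching are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch of two nuclei-segmentation metrics: the Dice
# coefficient on binary masks and panoptic quality (PQ) on instance sets.

def dice(pred, gt):
    """Dice coefficient between two binary masks (sets of pixels)."""
    if not pred and not gt:
        return 1.0
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def iou(a, b):
    """Intersection over union of two pixel sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def panoptic_quality(pred_instances, gt_instances, thr=0.5):
    """PQ = (sum of matched IoUs) / (TP + FP/2 + FN/2); a prediction and a
    ground-truth instance are matched when their IoU exceeds `thr`
    (with thr > 0.5 the match is unique)."""
    matched, used = [], set()
    for g in gt_instances:
        for j, p in enumerate(pred_instances):
            if j not in used and iou(p, g) > thr:
                matched.append(iou(p, g))
                used.add(j)
                break
    tp = len(matched)
    fp = len(pred_instances) - tp
    fn = len(gt_instances) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched) / denom if denom else 0.0

# Toy example: one perfectly predicted nucleus, one missed nucleus.
gt = [{(0, 0), (0, 1)}, {(5, 5), (5, 6)}]
pred = [{(0, 0), (0, 1)}]
print(dice(pred[0], gt[0]))        # 1.0
print(panoptic_quality(pred, gt))  # 1.0 / (1 + 0 + 0.5) ≈ 0.667
```

The missed nucleus counts as a false negative, which is why PQ drops below 1 even though the one detected nucleus is segmented perfectly.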
Affiliation(s)
- Amirreza Mahbod
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Ramona Woitek
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
2. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3. Wu X, Xu Z, Tong RKY. Continual learning in medical image analysis: A survey. Comput Biol Med 2024; 182:109206. PMID: 39332115; DOI: 10.1016/j.compbiomed.2024.109206.
Abstract
In the dynamic realm of practical clinical scenarios, Continual Learning (CL) has gained increasing interest in medical image analysis due to its potential to address major challenges associated with data privacy, model adaptability, memory inefficiency, prediction robustness and detection accuracy. In general, the primary challenge in adapting and advancing CL remains catastrophic forgetting. Beyond this challenge, recent years have witnessed a growing body of work that expands our comprehension and application of continual learning in the medical domain, highlighting its practical significance and intricacy. In this paper, we present an in-depth and up-to-date review of the application of CL in medical image analysis. Our discussion delves into the strategies employed to address specific tasks within the medical domain, categorizing existing CL methods into three settings: Task-Incremental Learning, Class-Incremental Learning, and Domain-Incremental Learning. These settings are further subdivided based on representative learning strategies, allowing us to assess their strengths and weaknesses in the context of various medical scenarios. By establishing a correlation between each medical challenge and the corresponding insights provided by CL, we provide a comprehensive understanding of the potential impact of these techniques. To enhance the utility of our review, we provide an overview of the commonly used benchmark medical datasets and evaluation metrics in the field. Through a comprehensive comparison, we discuss promising future directions for the application of CL in medical image analysis. A comprehensive list of studies is being continuously updated at https://github.com/xw1519/Continual-Learning-Medical-Adaptation.
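One family of continual-learning strategies that surveys such as this one cover is rehearsal: a small memory of past-task samples is replayed alongside each new task's data to mitigate catastrophic forgetting. The sketch below shows only the buffer mechanics (reservoir sampling over a task stream); the class and names are illustrative assumptions, not a method from the survey.

```python
# Minimal rehearsal-buffer sketch for continual learning: a fixed-capacity
# memory of (sample, label) pairs maintained across sequential tasks.
import random

class ReplayBuffer:
    """Reservoir-style memory: each seen item has equal probability of
    residing in the buffer, regardless of which task it came from."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = item   # evict a random resident

    def sample(self, k):
        """Draw a rehearsal mini-batch to mix with new-task data."""
        return self.rng.sample(self.memory, min(k, len(self.memory)))

buf = ReplayBuffer(capacity=3)
for task in range(2):              # two sequential "tasks"
    for i in range(5):
        buf.add((f"task{task}_img{i}", task))
print(len(buf.memory))             # capped at 3 despite 10 items seen
```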
Affiliation(s)
- Xinyao Wu
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China.
- Zhe Xu
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China; Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Raymond Kai-Yu Tong
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China.
4. Zeng X, Abdullah N, Sumari P. Self-supervised learning framework application for medical image analysis: a review and summary. Biomed Eng Online 2024; 23:107. PMID: 39465395; PMCID: PMC11514943; DOI: 10.1186/s12938-024-01299-9.
Abstract
Manual annotation of medical image datasets is labor-intensive and prone to biases. Moreover, the rate at which image data accumulate significantly outpaces the speed of manual annotation, posing a challenge to the advancement of machine learning, particularly in the realm of supervised learning. Self-supervised learning is an emerging field that capitalizes on unlabeled data for training, thereby circumventing the need for extensive manual labeling. This learning paradigm generates synthetic pseudo-labels through pretext tasks, compelling the network to acquire image representations in a pseudo-supervised manner, and subsequently fine-tunes on a limited set of annotated data to achieve enhanced performance. This review begins with an overview of prevalent types and advancements in self-supervised learning, followed by an exhaustive and systematic examination of methodologies within the medical imaging domain from 2018 to September 2024. The review encompasses a range of medical image modalities, including CT, MRI, X-ray, histology, and ultrasound, and addresses specific tasks such as classification, localization, segmentation, reduction of false positives, improvement of model performance, and enhancement of image quality. The analysis reveals a descending order in the volume of related studies, with CT and MRI leading the list, followed by X-ray, histology, and ultrasound. Except for CT and MRI, studies focusing on contrastive learning methods are more prevalent than generative learning approaches. Classification on MRI and ultrasound, and segmentation across all image modalities, still leave room for further exploration. Overall, this review can provide conceptual guidance for medical professionals seeking to combine self-supervised learning with their research.
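The pretext-task idea described above — turning unlabeled images into pseudo-labelled training pairs — can be sketched with the classic rotation-prediction task. This is a generic illustration, not a method from the review; the function names and list-of-lists image layout are assumptions.

```python
# Rotation-prediction pretext task: each unlabeled image yields four views
# (rotated 0/90/180/270 degrees), and the rotation index is the synthetic
# pseudo-label a network would be trained to predict.

def rotate90(img):
    """Rotate a 2-D list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def rotation_pretext(images):
    """Yield (view, pseudo_label) pairs for the rotation pretext task."""
    pairs = []
    for img in images:
        view = img
        for label in range(4):
            pairs.append((view, label))
            view = rotate90(view)
    return pairs

unlabeled = [[[1, 2], [3, 4]]]          # one tiny 2x2 "image", no labels
pairs = rotation_pretext(unlabeled)
print(len(pairs))                       # 4 pseudo-labelled views per image
print(pairs[1][0])                      # [[3, 1], [4, 2]]  (rotated once)
```

After pretraining on such pairs, the encoder would be fine-tuned on the small annotated set, as the abstract describes.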
Affiliation(s)
- Xiangrui Zeng
- School of Computer Sciences, Universiti Sains Malaysia, USM, 11800, Pulau Pinang, Malaysia.
- Nibras Abdullah
- Faculty of Computer Studies, Arab Open University, Jeddah, Saudi Arabia.
- Putra Sumari
- School of Computer Sciences, Universiti Sains Malaysia, USM, 11800, Pulau Pinang, Malaysia.
5. Nunes JD, Montezuma D, Oliveira D, Pereira T, Cardoso JS. A survey on cell nuclei instance segmentation and classification: Leveraging context and attention. Med Image Anal 2024; 99:103360. PMID: 39383642; DOI: 10.1016/j.media.2024.103360.
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from the gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while also facilitating the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. Besides, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task; to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be addressed.
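The attention mechanism the survey builds on can be illustrated with a minimal scaled dot-product attention in plain Python. This is the generic textbook mechanism, not any model from the survey; real implementations use tensor libraries, and the vector layout here is an assumption.

```python
# Scaled dot-product attention: each query attends over all keys, and the
# output is the softmax-weighted mixture of the corresponding values.
import math

def softmax(xs):
    m = max(xs)                              # subtract max for stability
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention(queries, keys, values):
    """Return, for each query, sum_i softmax(q.k_i / sqrt(d)) * v_i."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                             # query aligned with first key
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))                    # weights favour the first value
```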
Affiliation(s)
- João D Nunes
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal.
- Diana Montezuma
- IMP Diagnostics, Praça do Bom Sucesso, 4150-146 Porto, Portugal; Cancer Biology and Epigenetics Group, Research Center of IPO Porto (CI-IPOP)/[RISE@CI-IPOP], Portuguese Oncology Institute of Porto (IPO Porto)/Porto Comprehensive Cancer Center (Porto.CCC), R. Dr. António Bernardino de Almeida, 4200-072, Porto, Portugal; Doctoral Programme in Medical Sciences, School of Medicine and Biomedical Sciences - University of Porto (ICBAS-UP), Porto, Portugal
- Tania Pereira
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; FCTUC - Faculty of Science and Technology, University of Coimbra, Coimbra, 3004-516, Portugal
- Jaime S Cardoso
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal
6. Hanna MG, Olson NH, Zarella M, Dash RC, Herrmann MD, Furtado LV, Stram MN, Raciti PM, Hassell L, Mays A, Pantanowitz L, Sirintrapun JS, Krishnamurthy S, Parwani A, Lujan G, Evans A, Glassy EF, Bui MM, Singh R, Souers RJ, de Baca ME, Seheult JN. Recommendations for Performance Evaluation of Machine Learning in Pathology: A Concept Paper From the College of American Pathologists. Arch Pathol Lab Med 2024; 148:e335-e361. PMID: 38041522; DOI: 10.5858/arpa.2023-0042-cp.
Abstract
CONTEXT.— Machine learning applications in the pathology clinical domain are emerging rapidly. As decision support systems continue to mature, laboratories will increasingly need guidance to evaluate their performance in clinical practice. Currently there are no formal guidelines to assist pathology laboratories in verification and/or validation of such systems. These recommendations are being proposed for the evaluation of machine learning systems in the clinical practice of pathology. OBJECTIVE.— To propose recommendations for performance evaluation of in vitro diagnostic tests on patient samples that incorporate machine learning as part of the preanalytical, analytical, or postanalytical phases of the laboratory workflow. Topics described include considerations for machine learning model evaluation including risk assessment, predeployment requirements, data sourcing and curation, verification and validation, change control management, human-computer interaction, practitioner training, and competency evaluation. DATA SOURCES.— An expert panel performed a review of the literature, Clinical and Laboratory Standards Institute guidance, and laboratory and government regulatory frameworks. CONCLUSIONS.— Review of the literature and existing documents enabled the development of proposed recommendations. This white paper pertains to performance evaluation of machine learning systems intended to be implemented for clinical patient testing. Further studies with real-world clinical data are encouraged to support these proposed recommendations. Performance evaluation of machine learning models is critical to verification and/or validation of in vitro diagnostic tests using machine learning intended for clinical practice.
Affiliation(s)
- Matthew G Hanna
- From the Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, New York (Hanna, Sirintrapun)
- Niels H Olson
- The Defense Innovation Unit, Mountain View, California (Olson)
- The Department of Pathology, Uniformed Services University, Bethesda, Maryland (Olson)
- Mark Zarella
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota (Zarella, Seheult)
- Rajesh C Dash
- Department of Pathology, Duke University Health System, Durham, North Carolina (Dash)
- Markus D Herrmann
- Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston (Herrmann)
- Larissa V Furtado
- Department of Pathology, St. Jude Children's Research Hospital, Memphis, Tennessee (Furtado)
- Michelle N Stram
- The Department of Forensic Medicine, New York University, and Office of Chief Medical Examiner, New York (Stram)
- Lewis Hassell
- Department of Pathology, Oklahoma University Health Sciences Center, Oklahoma City (Hassell)
- Alex Mays
- The MITRE Corporation, McLean, Virginia (Mays)
- Liron Pantanowitz
- Department of Pathology & Clinical Labs, University of Michigan, Ann Arbor (Pantanowitz)
- Joseph S Sirintrapun
- From the Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, New York (Hanna, Sirintrapun)
- Anil Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus (Parwani, Lujan)
- Giovanni Lujan
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus (Parwani, Lujan)
- Andrew Evans
- Laboratory Medicine, Mackenzie Health, Toronto, Ontario, Canada (Evans)
- Eric F Glassy
- Affiliated Pathologists Medical Group, Rancho Dominguez, California (Glassy)
- Marilyn M Bui
- Departments of Pathology and Machine Learning, Moffitt Cancer Center, Tampa, Florida (Bui)
- Rajendra Singh
- Department of Dermatopathology, Summit Health, Summit Woodland Park, New Jersey (Singh)
- Rhona J Souers
- Department of Biostatistics, College of American Pathologists, Northfield, Illinois (Souers)
- Jansen N Seheult
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota (Zarella, Seheult)
7. Meng Z, Dong J, Zhang B, Li S, Wu R, Su F, Wang G, Guo L, Zhao Z. NuSEA: Nuclei Segmentation With Ellipse Annotations. IEEE J Biomed Health Inform 2024; 28:5996-6007. PMID: 38913516; DOI: 10.1109/jbhi.2024.3418106.
Abstract
OBJECTIVE Nuclei segmentation is a crucial pre-task for pathological microenvironment quantification. However, the acquisition of manually precise nuclei annotations for improving the performance of deep learning models is time-consuming and expensive. METHODS In this paper, an efficient nuclear annotation tool called NuSEA is proposed to achieve accurate nucleus segmentation, where a simple but effective ellipse annotation is applied. Specifically, the core network U-Light of NuSEA is lightweight with only 0.86 M parameters, which is suitable for real-time nuclei segmentation. In addition, an Elliptical Field Loss and a Texture Loss are proposed to enhance the edge segmentation and constrain the smoothness simultaneously. RESULTS Extensive experiments on three public datasets (MoNuSeg, CPM-17, and CoNSeP) demonstrate that NuSEA is superior to the state-of-the-art (SOTA) methods and better than existing algorithms based on point, rectangle, and text annotations. CONCLUSIONS With the assistance of NuSEA, a new dataset called NuSEA-dataset v1.0, encompassing 118,857 annotated nuclei from the whole-slide images of 12 organs is released. SIGNIFICANCE NuSEA provides a rapid and effective annotation tool for nuclei in histopathological images, benefiting future explorations in deep learning algorithms.
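The core idea of ellipse annotation — describing a nucleus by a few ellipse parameters instead of a full pixel mask — can be sketched by rasterizing an axis-aligned ellipse into a pixel set. This is an illustrative assumption about how such an annotation could be converted into a training mask, not NuSEA's code (which also handles rotation and its Elliptical Field Loss).

```python
# Rasterize an axis-aligned ellipse annotation (center cx, cy and
# semi-axes a, b) into the set of pixels it covers in an h x w image.

def ellipse_mask(cx, cy, a, b, h, w):
    """Return the set of (row, col) pixels inside the ellipse
    ((x - cx)/a)^2 + ((y - cy)/b)^2 <= 1."""
    return {
        (y, x)
        for y in range(h)
        for x in range(w)
        if ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
    }

# A wide, flat "nucleus" centered in a 5x5 tile.
mask = ellipse_mask(cx=2, cy=2, a=2, b=1, h=5, w=5)
print(len(mask))     # 7 pixels inside
```

A few scalar parameters per nucleus are far cheaper to annotate than a free-form boundary, which is the efficiency argument the abstract makes.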
8. Yoon C, Park E, Misra S, Kim JY, Baik JW, Kim KG, Jung CK, Kim C. Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens. Light Sci Appl 2024; 13:226. PMID: 39223152; PMCID: PMC11369251; DOI: 10.1038/s41377-024-01554-7.
Abstract
In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization are still problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E ones. In segmentation, various features (e.g., the cell area, number of cells, and the distance between cell nuclei) have been successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF has achieved a 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
Affiliation(s)
- Chiho Yoon
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Eunwoo Park
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Sampa Misra
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Jin Young Kim
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Opticho Inc., Pohang, Republic of Korea
- Jin Woo Baik
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Kwang Gi Kim
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Republic of Korea
- Chan Kwon Jung
- Cancer Research Institute, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Opticho Inc., Pohang, Republic of Korea
9. Yalcinkaya DM, Youssef K, Heydari B, Wei J, Merz NB, Judd R, Dharmakumar R, Simonetti OP, Weinsaft JW, Raman SV, Sharif B. Improved Robustness for Deep Learning-based Segmentation of Multi-Center Myocardial Perfusion MRI Datasets Using Data Adaptive Uncertainty-guided Space-time Analysis. arXiv 2024; arXiv:2408.04805v1. PMID: 39148930; PMCID: PMC11326424.
Abstract
Background Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed it on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location or scanner vendor.
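The selection step the abstract describes — every DNN in the pool segments the case, and the candidate with the least uncertain map wins — can be sketched as below. The data layout (plain nested lists), the mean-uncertainty criterion, and all names are illustrative assumptions; the paper's actual selection rule may differ.

```python
# Hedged sketch of uncertainty-guided model selection over a DNN pool:
# each candidate is a (segmentation, pixel-wise uncertainty map) pair,
# and the segmentation with the lowest aggregate uncertainty is kept.

def mean_uncertainty(unc_map):
    """Average a 2-D uncertainty map (list of rows) to one scalar."""
    flat = [u for row in unc_map for u in row]
    return sum(flat) / len(flat)

def select_best(candidates):
    """Return the segmentation whose uncertainty map has the lowest mean."""
    return min(candidates, key=lambda c: mean_uncertainty(c[1]))[0]

# Toy pool: two models segment the same test case.
pool_outputs = [
    ("seg_model_a", [[0.9, 0.8], [0.7, 0.9]]),   # uncertain everywhere
    ("seg_model_b", [[0.1, 0.2], [0.1, 0.1]]),   # confident
]
print(select_best(pool_outputs))   # seg_model_b
```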
Affiliation(s)
- Dilek M. Yalcinkaya
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- Bobak Heydari
  - Stephenson Cardiac Imaging Centre, Department of Cardiac Sciences, University of Calgary, Alberta, Canada
- Janet Wei
  - Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Noel Bairey Merz
  - Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Robert Judd
  - Division of Cardiology, Department of Medicine, Duke University, Durham, NC, USA
- Rohan Dharmakumar
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Orlando P. Simonetti
  - Department of Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Jonathan W. Weinsaft
  - Division of Cardiology at NY Presbyterian Hospital, Weill Cornell Medical Center, New York, NY, USA
- Subha V. Raman
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - OhioHealth, Columbus, OH, USA
- Behzad Sharif
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
10
Liu W, Zhang Q, Li Q, Wang S. Contrastive and uncertainty-aware nuclei segmentation and classification. Comput Biol Med 2024; 178:108667. [PMID: 38850962 DOI: 10.1016/j.compbiomed.2024.108667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2024] [Revised: 04/18/2024] [Accepted: 05/26/2024] [Indexed: 06/10/2024]
Abstract
Nuclei segmentation and classification play a crucial role in pathology diagnosis, enabling pathologists to analyze cellular characteristics accurately. Overlapping clustered nuclei, misdetection of small-scale nuclei, and misclassification induced by pleomorphic nuclei have long been major challenges in nuclei segmentation and classification tasks. To this end, we introduce an auxiliary task of nuclei boundary-guided contrastive learning to enhance the representativeness and discriminative power of visual features, particularly for addressing the challenge posed by the unclear contours of adherent nuclei and small nuclei. In addition, misclassifications resulting from pleomorphic nuclei often exhibit low classification confidence, indicating a high level of uncertainty. To mitigate misclassification, we capitalize on the characteristic clustering of similar cells to propose a locality-aware class embedding module, offering a regional perspective to capture category information. Moreover, we address uncertain classification in densely aggregated nuclei by designing a top-k uncertainty attention module that leverages deep features to enhance shallow features, thereby improving the learning of contextual semantic information. We demonstrate that the proposed network outperforms off-the-shelf methods in both nuclei segmentation and classification experiments, achieving state-of-the-art performance.
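The top-k uncertainty idea can be sketched in a simplified form. This is a hypothetical illustration only: the paper's module operates on feature maps inside the network, whereas here we merely show how the k most uncertain pixels would be identified from softmax outputs via predictive entropy; all function names are ours.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-pixel entropy of class probabilities; (H, W, C) -> (H, W)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def top_k_uncertain(probs, k):
    """Row/column indices of the k most uncertain pixels (highest entropy)."""
    ent = predictive_entropy(probs)
    flat = np.argsort(ent, axis=None)[::-1][:k]
    return np.unravel_index(flat, ent.shape)

# Toy example: a 2x2 image with 2 classes.
probs = np.array([[[0.5, 0.5], [0.9, 0.1]],
                  [[0.8, 0.2], [0.55, 0.45]]])
rows, cols = top_k_uncertain(probs, k=2)
# The two most uncertain pixels are (0, 0) and (1, 1): their class
# probabilities are closest to uniform.
```

A full attention module would then use the features at these positions as queries to refine shallow features; that part is omitted here.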
Affiliation(s)
- Wenxi Liu
  - College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Qing Zhang
  - College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Qi Li
  - College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Shu Wang
  - College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
11
Jain A, Laidlaw DH, Bajcsy P, Singh R. Memory-efficient semantic segmentation of large microscopy images using graph-based neural networks. Microscopy (Oxf) 2024; 73:275-286. [PMID: 37864808 DOI: 10.1093/jmicro/dfad049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 07/14/2023] [Accepted: 10/05/2023] [Indexed: 10/23/2023] Open
Abstract
We present a graph neural network (GNN)-based framework applied to large-scale microscopy image segmentation tasks. While deep learning models, like convolutional neural networks (CNNs), have become common for automating image segmentation tasks, they are limited by the image size that can fit in the memory of computational hardware. In a GNN framework, large-scale images are converted into graphs using superpixels (regions of pixels with similar color/intensity values), allowing us to input information from the entire image into the model. By converting images with hundreds of millions of pixels to graphs with thousands of nodes, we can segment large images using memory-limited computational resources. We compare the performance of GNN- and CNN-based segmentation in terms of accuracy, training time and required graphics processing unit memory. Based on our experiments with microscopy images of biological cells and cell colonies, GNN-based segmentation used one to three orders of magnitude fewer computational resources with only a change in accuracy of −2% to +0.3%. Furthermore, errors due to superpixel generation can be reduced by either using better superpixel generation algorithms or increasing the number of superpixels, thereby allowing for improvement in the GNN framework's accuracy. This trade-off between accuracy and computational cost over CNN models makes the GNN framework attractive for many large-scale microscopy image segmentation tasks in biology.
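The image-to-graph conversion described above can be illustrated with a toy sketch. Assumptions: a fixed grid of tiles stands in for proper superpixels (a real pipeline would use a SLIC-style algorithm), node features are tile mean intensities, and edges connect 4-adjacent tiles; the function name and block size are ours.

```python
import numpy as np

def grid_superpixel_graph(image, block=2):
    """Convert a 2-D intensity image into a small graph.

    Each `block` x `block` tile becomes one node whose feature is the
    tile's mean intensity; edges connect tiles that touch horizontally
    or vertically.
    """
    h, w = image.shape
    gh, gw = h // block, w // block
    # Tile means via a (gh, block, gw, block) view, averaged over tile axes.
    feats = image[:gh * block, :gw * block].reshape(gh, block, gw, block).mean(axis=(1, 3))
    nodes = feats.ravel()
    edges = []
    for r in range(gh):
        for c in range(gw):
            i = r * gw + c
            if c + 1 < gw:
                edges.append((i, i + 1))   # right neighbour
            if r + 1 < gh:
                edges.append((i, i + gw))  # bottom neighbour
    return nodes, edges

img = np.arange(16, dtype=float).reshape(4, 4)
nodes, edges = grid_superpixel_graph(img, block=2)
# 4 nodes (a 2x2 grid of tiles) and 4 undirected edges.
```

The memory saving in the abstract comes from exactly this reduction: a graph with thousands of nodes replaces an image with hundreds of millions of pixels.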
Affiliation(s)
- Atishay Jain
  - Department of Computer Science, Brown University, 115 Waterman Street, Providence, Rhode Island 02906, USA
- David H Laidlaw
  - Department of Computer Science, Brown University, 115 Waterman Street, Providence, Rhode Island 02906, USA
- Peter Bajcsy
  - Information Technology Laboratory, National Institute of Standards and Technology (NIST), 100 Bureau Drive, Gaithersburg, Maryland 20899, USA
- Ritambhara Singh
  - Department of Computer Science, Brown University, 115 Waterman Street, Providence, Rhode Island 02906, USA
  - Center for Computational Molecular Biology, Brown University, 164 Angell Street, Providence, Rhode Island 02906, USA
12
Lin Y, Wang Z, Zhang D, Cheng KT, Chen H. BoNuS: Boundary Mining for Nuclei Segmentation With Partial Point Labels. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2137-2147. [PMID: 38231818 DOI: 10.1109/tmi.2024.3355068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2024]
Abstract
Nuclei segmentation is a fundamental prerequisite in the digital pathology workflow. The development of automated methods for nuclei segmentation enables quantitative analysis of the wide existence and large variances in nuclei morphometry in histopathology images. However, manual annotation of tens of thousands of nuclei is tedious and time-consuming, and requires a significant amount of human effort and domain-specific expertise. To alleviate this problem, in this paper, we propose a weakly-supervised nuclei segmentation method that only requires partial point labels of nuclei. Specifically, we propose a novel boundary mining framework for nuclei segmentation, named BoNuS, which simultaneously learns nuclei interior and boundary information from the point labels. To achieve this goal, we propose a novel boundary mining loss, which guides the model to learn the boundary information by exploring the pairwise pixel affinity in a multiple-instance learning manner. Then, we consider a more challenging problem, i.e., partial point labels, for which we propose a nuclei detection module with curriculum learning to detect the missing nuclei using prior morphological knowledge. The proposed method is validated on three public datasets: the MoNuSeg, CPM, and CoNIC datasets. Experimental results demonstrate the superior performance of our method over state-of-the-art weakly-supervised nuclei segmentation methods. Code: https://github.com/hust-linyi/bonus.
13
Qian Z, Wang Z, Zhang X, Wei B, Lai M, Shou J, Fan Y, Xu Y. MSNSegNet: attention-based multi-shape nuclei instance segmentation in histopathology images. Med Biol Eng Comput 2024; 62:1821-1836. [PMID: 38401007 DOI: 10.1007/s11517-024-03050-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 02/13/2024] [Indexed: 02/26/2024]
Abstract
In clinical research, the segmentation of irregularly shaped nuclei, particularly in mesenchymal areas like fibroblasts, is crucial yet often neglected. These irregular nuclei are significant for assessing tissue repair in immunotherapy, a process involving neovascularization and fibroblast proliferation. Proper segmentation of these nuclei is vital for evaluating immunotherapy's efficacy, as it provides insights into pathological features. However, the challenge lies in the pronounced curvature variations of these non-convex nuclei, making their segmentation more difficult than that of regular nuclei. In this work, we introduce a new task to segment nuclei with both regular and irregular morphology, namely multi-shape nuclei segmentation. We propose a proposal-based method to perform multi-shape nuclei segmentation. By leveraging the two-stage structure of the proposal-based method, a powerful refinement module with high computational costs can be selectively deployed only in local regions, improving segmentation accuracy without compromising computational efficiency. We introduce a novel self-attention module to refine features in proposals for the sake of effectiveness and efficiency in the second stage. The self-attention module improves segmentation performance by capturing long-range dependencies to assist in distinguishing the foreground from the background. In this process, similar features get high attention weights while dissimilar ones get low attention weights. In the first stage, we introduce a residual attention module and a semantic-aware module to accurately predict candidate proposals. The two modules capture more interpretable features and introduce additional supervision through a semantic-aware loss. In addition, we construct a dataset with a higher proportion of non-convex nuclei than existing nuclei datasets, namely the multi-shape nuclei (MsN) dataset.
Our MSNSegNet method demonstrates notable improvements across various metrics compared to the second-highest-scoring methods. For all nuclei, the Dice score improved by approximately 1.66%, AJI by about 2.15%, and object-level Dice (Dice_obj) by roughly 0.65%. For non-convex nuclei, which are crucial in clinical applications, our method's AJI improved significantly by approximately 3.86% and Dice_obj by around 2.54%. These enhancements underscore the effectiveness of our approach on multi-shape nuclei segmentation, particularly in challenging scenarios involving irregularly shaped nuclei.
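As a reference point for the metrics reported here and throughout this listing, a minimal Dice computation for binary masks can be sketched as follows. AJI and object-level Dice additionally require instance matching and are omitted for brevity; the function name is ours.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
gt = np.array([[1, 1, 0],
               [0, 0, 0],
               [0, 0, 1]])
score = dice_score(pred, gt)  # 2*2 / (3+3) = 0.666...
```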
Affiliation(s)
- Ziniu Qian
  - School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing, 100191, China
- Zihua Wang
  - School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing, 100191, China
- Xin Zhang
  - School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing, 100191, China
- Bingzheng Wei
  - Xiaomi Corporation, Haidian District, Beijing, 100085, China
- Maode Lai
  - Department of Pathology, School of Medicine, Zhejiang Provincial Key Laboratory of Disease Proteomics and Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Zhejiang University, Hangzhou, 310027, Zhejiang, China
- Jianzhong Shou
  - Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Chaoyang District, Beijing, 100021, China
- Yubo Fan
  - School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing, 100191, China
- Yan Xu
  - School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing, 100191, China
14
Tan C, Tian L, Wu C, Li K. Rapid identification of medicinal plants via visual feature-based deep learning. PLANT METHODS 2024; 20:81. [PMID: 38822406 PMCID: PMC11140858 DOI: 10.1186/s13007-024-01202-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Accepted: 05/03/2024] [Indexed: 06/03/2024]
Abstract
BACKGROUND Traditional Chinese Medicinal Plants (CMPs) hold a core status in China's healthcare system and cultural heritage. Their use has been practiced and refined over thousands of years for health protection and clinical treatment, and they play an indispensable role in both the traditional health landscape and modern medical care. Accurate identification of CMPs is important because confusion arising from different processing conditions and cultivation environments can compromise clinical safety and medication efficacy. RESULTS In this study, we utilize a self-developed device to obtain high-resolution data and construct a visual multi-variety CMPs image dataset. First, a random local data enhancement preprocessing method is proposed to enrich the feature representation of imbalanced data through random cropping and random shadowing. Then, a novel hybrid supervised pre-training network is proposed that expands the integration of global features within Masked Autoencoders (MAE) by incorporating a parallel classification branch; it effectively enhances feature capture by integrating global features and local details. In addition, newly designed losses, based on reconstruction loss and classification loss, are proposed to strengthen training efficiency and improve learning capacity. CONCLUSIONS Extensive experiments are performed on our dataset as well as a public dataset. Experimental results demonstrate that our method achieves the best performance among state-of-the-art methods, highlighting its efficiency and good prospects for real-world applications.
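The joint objective described above, a reconstruction term plus a classification term, can be sketched numerically. The equal weighting, loss forms, and all names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def hybrid_loss(recon, target, logits, label, weight=1.0):
    """Joint objective: pixel reconstruction (MSE) plus classification (CE).

    `weight` balances the classification branch against the MAE-style
    reconstruction term; its value here is purely illustrative.
    """
    mse = np.mean((recon - target) ** 2)
    # Numerically stable log-softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]
    return mse + weight * ce

recon = np.zeros((2, 2))
target = np.ones((2, 2))
loss = hybrid_loss(recon, target, logits=np.array([2.0, 0.0]), label=0)
# MSE = 1.0; CE = ln(1 + e^-2); total ≈ 1.127
```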
Affiliation(s)
- Chaoqun Tan
  - College of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, 611137, China
- Long Tian
  - School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, UK
- Chunjie Wu
  - Innovative Institute of Chinese Medicine and Pharmacy/Academy for Interdiscipline, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Ke Li
  - National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, 610065, China
15
Fiorin A, López Pablo C, Lejeune M, Hamza Siraj A, Della Mea V. Enhancing AI Research for Breast Cancer: A Comprehensive Review of Tumor-Infiltrating Lymphocyte Datasets. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01043-8. [PMID: 38806950 DOI: 10.1007/s10278-024-01043-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/16/2023] [Revised: 01/19/2024] [Accepted: 02/07/2024] [Indexed: 05/30/2024]
Abstract
The field of immunology is fundamental to our understanding of the intricate dynamics of the tumor microenvironment. In particular, tumor-infiltrating lymphocyte (TIL) assessment emerges as an essential aspect in breast cancer cases. To gain comprehensive insights, the quantification of TILs through computer-assisted pathology (CAP) tools has become a prominent approach, employing advanced artificial intelligence models based on deep learning techniques. The successful recognition of TILs requires the models to be trained, a process that demands access to annotated datasets. Unfortunately, this task is hampered not only by the scarcity of such datasets, but also by the time-consuming nature of the annotation phase required to create them. Our review endeavors to examine publicly accessible datasets pertaining to the TIL domain and thereby to become a valuable resource for the TIL community. The overall aim of the present review is thus to make it easier to train and validate current and upcoming CAP tools for TIL assessment by inspecting and evaluating existing publicly available online datasets.
Affiliation(s)
- Alessio Fiorin
  - Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Carlos López Pablo
  - Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Marylène Lejeune
  - Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
  - Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Ameer Hamza Siraj
  - Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Vincenzo Della Mea
  - Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
16
Yilmaz F, Brickman A, Najdawi F, Yakirevich E, Egger R, Resnick MB. Advancing Artificial Intelligence Integration Into the Pathology Workflow: Exploring Opportunities in Gastrointestinal Tract Biopsies. J Transl Med 2024; 104:102043. [PMID: 38431118 DOI: 10.1016/j.labinv.2024.102043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 02/14/2024] [Accepted: 02/26/2024] [Indexed: 03/05/2024] Open
Abstract
This review aims to present a comprehensive overview of the current landscape of artificial intelligence (AI) applications in the analysis of tubular gastrointestinal biopsies. These publications cover a spectrum of conditions, ranging from inflammatory ailments to malignancies. Moving beyond the conventional diagnosis based on hematoxylin and eosin-stained whole-slide images, the review explores additional implications of AI, including its involvement in interpreting immunohistochemical results, molecular subtyping, and the identification of cellular spatial biomarkers. Furthermore, the review examines how AI can contribute to enhancing the quality and control of diagnostic processes, introducing new workflow options, and addressing the limitations and caveats associated with current AI platforms in this context.
Affiliation(s)
- Fazilet Yilmaz
  - The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Arlen Brickman
  - The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Fedaa Najdawi
  - The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Evgeny Yakirevich
  - The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Murray B Resnick
  - The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
17
Pérez-Cano J, Sansano Valero I, Anglada-Rotger D, Pina O, Salembier P, Marques F. Combining graph neural networks and computer vision methods for cell nuclei classification in lung tissue. Heliyon 2024; 10:e28463. [PMID: 38590866 PMCID: PMC10999915 DOI: 10.1016/j.heliyon.2024.e28463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2024] [Accepted: 03/19/2024] [Indexed: 04/10/2024] Open
Abstract
The detection of tumoural cells from whole slide images is an essential task in medical diagnosis and research. In this article, we propose and analyse a novel approach that combines computer vision-based models with graph neural networks to improve the accuracy of automated tumoural cell detection in lung tissue. Our proposal leverages the inherent structure and relationships between cells in the tissue. Experimental results on our own curated dataset show that modelling the problem with graphs gives the model a clear advantage over just working at pixel level. This change in perspective provides extra information that makes it possible to improve the performance. The reduction of dimensionality that comes from working with the graph also allows us to increase the field of view with low computational requirements. Code is available at https://github.com/Jerry-Master/lung-tumour-study, models are uploaded to https://huggingface.co/Jerry-Master/Hovernet-plus-Graphs, and the dataset is published on Zenodo https://zenodo.org/doi/10.5281/zenodo.8368122.
Affiliation(s)
- Jose Pérez-Cano
  - Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain
- David Anglada-Rotger
  - Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain
- Oscar Pina
  - Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain
- Philippe Salembier
  - Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain
- Ferran Marques
  - Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain
18
Luna M, Chikontwe P, Park SH. Enhanced Nuclei Segmentation and Classification via Category Descriptors in the SAM Model. Bioengineering (Basel) 2024; 11:294. [PMID: 38534568 DOI: 10.3390/bioengineering11030294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Revised: 03/13/2024] [Accepted: 03/19/2024] [Indexed: 03/28/2024] Open
Abstract
Segmenting and classifying nuclei in H&E histopathology images is often limited by the long-tailed distribution of nuclei types. However, the strong generalization ability of image segmentation foundation models like the Segment Anything Model (SAM) can help improve the detection quality of rare types of nuclei. In this work, we introduce category descriptors to perform nuclei segmentation and classification by prompting the SAM model. We close the domain gap between histopathology and natural scene images by aligning features in low-level space while preserving the high-level representations of SAM. We performed extensive experiments on the Lizard dataset, validating the ability of our model to perform automatic nuclei segmentation and classification, especially for rare nuclei types, where we achieved a significant detection improvement in F1 score of up to 12%. Our model also maintains compatibility with manual point prompts for interactive refinement during inference without requiring any additional training.
Affiliation(s)
- Miguel Luna
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Philip Chikontwe
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Sang Hyun Park
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
  - AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
19
Mahbod A, Polak C, Feldmann K, Khan R, Gelles K, Dorffner G, Woitek R, Hatamikia S, Ellinger I. NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Sci Data 2024; 11:295. [PMID: 38486039 PMCID: PMC10940572 DOI: 10.1038/s41597-024-03117-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Accepted: 03/04/2024] [Indexed: 03/18/2024] Open
Abstract
In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models need fully annotated datasets for training, which are challenging to acquire, especially in the medical domain. In this work, we release one of the largest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotations are impossible, even for human experts. The dataset and detailed step-by-step instructions to generate related segmentation masks are publicly available on the respective repositories.
Affiliation(s)
- Amirreza Mahbod
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Christine Polak
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Feldmann
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Rumsha Khan
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Gelles
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Georg Dorffner
  - Institute of Artificial Intelligence, Medical University of Vienna, Vienna, 1090, Austria
- Ramona Woitek
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Sepideh Hatamikia
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
  - Austrian Center for Medical Innovation and Technology, Wiener Neustadt, 2700, Austria
- Isabella Ellinger
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
20
Luna M, Chikontwe P, Nam S, Park SH. Attention guided multi-scale cluster refinement with extended field of view for amodal nuclei segmentation. Comput Biol Med 2024; 170:108015. [PMID: 38266467 DOI: 10.1016/j.compbiomed.2024.108015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 01/04/2024] [Accepted: 01/19/2024] [Indexed: 01/26/2024]
Abstract
Nuclei segmentation plays a crucial role in disease understanding and diagnosis. In whole slide images, cell nuclei often appear overlapping and densely packed with ambiguous boundaries due to the underlying 3D structure of histopathology samples. Instance segmentation via deep neural networks with object clustering is able to detect individual segments in crowded nuclei but suffers from a limited field of view, and does not support amodal segmentation. In this work, we introduce a dense feature pyramid network with a feature mixing module to increase the field of view of the segmentation model while keeping pixel-level details. We also improve the model output quality by adding a multi-scale self-attention guided refinement module that sequentially adjusts predictions as resolution increases. Finally, we enable clusters to share pixels by separating the instance clustering objective function from other pixel-related tasks, and introduce supervision to occluded areas to guide the learning process. For evaluation of amodal nuclear segmentation, we also update prior metrics used in common modal segmentation to allow the evaluation of overlapping masks and mitigate over-penalization issues via a novel unique matching algorithm. Our experiments demonstrate consistent performance across multiple datasets with significantly improved segmentation quality.
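The unique matching idea used for evaluation, where each prediction and each ground-truth instance is matched at most once to avoid over-penalizing overlapping (amodal) masks, can be sketched with a greedy IoU-based rule. This is an illustrative stand-in under stated assumptions (pairwise IoUs precomputed, greedy descending order, a 0.5 threshold); the paper's actual algorithm may differ, and all names are ours.

```python
import numpy as np

def greedy_unique_match(ious, thresh=0.5):
    """One-to-one matching of predicted to ground-truth instances.

    ious: (n_pred, n_gt) pairwise IoU matrix. Pairs are matched greedily
    in descending IoU order; each prediction and each GT instance is used
    at most once. Returns a list of (pred_idx, gt_idx) pairs.
    """
    _, n_gt = ious.shape
    used_p, used_g, matches = set(), set(), []
    for flat in np.argsort(ious, axis=None)[::-1]:
        p, g = divmod(int(flat), n_gt)
        if ious[p, g] < thresh:
            break  # all remaining pairs are below the matching threshold
        if p not in used_p and g not in used_g:
            matches.append((p, g))
            used_p.add(p)
            used_g.add(g)
    return matches

ious = np.array([[0.9, 0.6],
                 [0.7, 0.8]])
pairs = greedy_unique_match(ious)
# Greedy order: 0.9 matches (0, 0); 0.8 matches (1, 1); the remaining
# pairs involve already-used instances and are skipped.
```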
Affiliation(s)
- Miguel Luna
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Philip Chikontwe
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Siwoo Nam
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Sang Hyun Park
  - Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
  - AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
Collapse
|
21
|
Aung TN, Bates KM, Rimm DL. High-Plex Assessment of Biomarkers in Tumors. Mod Pathol 2024; 37:100425. [PMID: 38219953 DOI: 10.1016/j.modpat.2024.100425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 01/02/2024] [Accepted: 01/08/2024] [Indexed: 01/16/2024]
Abstract
The assessment of biomarkers plays a critical role in the diagnosis and treatment of many cancers. Biomarkers not only provide diagnostic, prognostic, or predictive information but also can act as effective targets for new pharmaceutical therapies. As the utility of biomarkers increases, it becomes more important to utilize accurate and efficient methods for biomarker discovery and, ultimately, clinical assessment. High-plex imaging studies, defined here as assessment of 8 or more biomarkers on a single slide, have become the method of choice for biomarker discovery and assessment of biomarker spatial context. In this review, we discuss methods of measuring biomarkers in slide-mounted tissue samples, detail the various high-plex methods that allow for the simultaneous assessment of multiple biomarkers in situ, and describe the impact of high-plex biomarker assessment on the future of anatomic pathology.
Affiliation(s)
- Thazin N Aung: Department of Pathology, Yale University School of Medicine, New Haven, Connecticut
- Katherine M Bates: Department of Pathology, Yale University School of Medicine, New Haven, Connecticut
- David L Rimm: Department of Pathology, Yale University School of Medicine, New Haven, Connecticut; Department of Internal Medicine (Medical Oncology), Yale University School of Medicine, New Haven, Connecticut
22
Graham S, Vu QD, Jahanifar M, Weigert M, Schmidt U, Zhang W, Zhang J, Yang S, Xiang J, Wang X, Rumberger JL, Baumann E, Hirsch P, Liu L, Hong C, Aviles-Rivero AI, Jain A, Ahn H, Hong Y, Azzuni H, Xu M, Yaqub M, Blache MC, Piégu B, Vernay B, Scherr T, Böhland M, Löffler K, Li J, Ying W, Wang C, Snead D, Raza SEA, Minhas F, Rajpoot NM. CoNIC Challenge: Pushing the frontiers of nuclear detection, segmentation, classification and counting. Med Image Anal 2024; 92:103047. [PMID: 38157647 DOI: 10.1016/j.media.2023.103047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 09/19/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024]
Abstract
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
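One of the standard metrics behind challenge leaderboards of this kind is panoptic quality (PQ), which multiplies a segmentation-quality term (mean IoU of matched pairs) by a recognition-quality term (an F1-like detection score). A minimal sketch of the standard PQ formula (function and parameter names are ours):

```python
def panoptic_quality(matched_ious, n_pred, n_gt):
    """PQ = (mean IoU of matched pairs) * (TP / (TP + FP/2 + FN/2)).

    matched_ious: IoU values of uniquely matched prediction/ground-truth
    pairs (each above the usual 0.5 threshold); n_pred and n_gt are the
    total numbers of predicted and ground-truth instances.
    """
    tp = len(matched_ious)
    if tp == 0:
        return 0.0
    fp = n_pred - tp                        # unmatched predictions
    fn = n_gt - tp                          # unmatched ground-truth instances
    sq = sum(matched_ious) / tp             # segmentation quality
    rq = tp / (tp + 0.5 * fp + 0.5 * fn)    # recognition quality
    return sq * rq
```

For example, two matches with IoUs 0.8 and 0.6 plus one spurious prediction give SQ = 0.7, RQ = 0.8, PQ = 0.56.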
Affiliation(s)
- Simon Graham: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Quoc Dang Vu: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Mostafa Jahanifar: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Martin Weigert: Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
- Wenhua Zhang: The Department of Computer Science, The University of Hong Kong, Hong Kong
- Sen Yang: College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jinxi Xiang: Department of Precision Instruments, Tsinghua University, Beijing, China
- Xiyue Wang: College of Computer Science, Sichuan University, Chengdu, China
- Josef Lorenz Rumberger: Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany; Charité University Medicine, Berlin, Germany
- Peter Hirsch: Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany
- Lihao Liu: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Chenyang Hong: Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
- Angelica I Aviles-Rivero: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Ayushi Jain: Softsensor.ai, Bridgewater, NJ, United States of America; PRR.ai, TX, United States of America
- Heeyoung Ahn: Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Yiyu Hong: Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Hussam Azzuni: Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Min Xu: Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Mohammad Yaqub: Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Benoît Piégu: CNRS, IFCE, INRAE, Université de Tours, PRC, 3780, Nouzilly, France
- Bertrand Vernay: Institut de Génétique et de Biologie Moléculaire et Cellulaire, Illkirch, France; Centre National de la Recherche Scientifique, UMR7104, Illkirch, France; Institut National de la Santé et de la Recherche Médicale, INSERM, U1258, Illkirch, France; Université de Strasbourg, Strasbourg, France
- Tim Scherr: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Moritz Böhland: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Jiachen Li: School of Software Engineering, South China University of Technology, Guangzhou, China
- Weiqin Ying: School of Software Engineering, South China University of Technology, Guangzhou, China
- Chixin Wang: School of Software Engineering, South China University of Technology, Guangzhou, China
- David Snead: Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom; Division of Biomedical Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Shan E Ahmed Raza: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Fayyaz Minhas: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom
23
Karageorgos GM, Cho S, McDonough E, Chadwick C, Ghose S, Owens J, Jung KJ, Machiraju R, West R, Brooks JD, Mallick P, Ginty F. Deep learning-based automated pipeline for blood vessel detection and distribution analysis in multiplexed prostate cancer images. FRONTIERS IN BIOINFORMATICS 2024; 3:1296667. [PMID: 38323039 PMCID: PMC10844485 DOI: 10.3389/fbinf.2023.1296667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 12/18/2023] [Indexed: 02/08/2024] Open
Abstract
Introduction: Prostate cancer is a highly heterogeneous disease, presenting varying levels of aggressiveness and response to treatment. Angiogenesis is one of the hallmarks of cancer, providing oxygen and nutrient supply to tumors. Microvessel density has previously been correlated with higher Gleason score and poor prognosis. Manual segmentation of blood vessels (BVs) in microscopy images is challenging, time-consuming and may be prone to inter-rater variability. In this study, an automated pipeline is presented for BV detection and distribution analysis in multiplexed prostate cancer images. Methods: A deep learning model was trained to segment BVs by combining CD31, CD34 and collagen IV images. In addition, the trained model was used to analyze the size and distribution patterns of BVs in relation to disease progression in a cohort of prostate cancer patients (N = 215). Results: The model was capable of accurately detecting and segmenting BVs, as compared to ground truth annotations provided by two reviewers. The precision (P), recall (R) and Dice similarity coefficient (DSC) were 0.93 (SD 0.04), 0.97 (SD 0.02) and 0.71 (SD 0.07) with respect to reviewer 1, and 0.95 (SD 0.05), 0.94 (SD 0.07) and 0.70 (SD 0.08) with respect to reviewer 2, respectively. BV count was significantly associated with 5-year recurrence (adjusted p = 0.0042), while both BV count and area were significantly associated with Gleason grade (adjusted p = 0.032 and 0.003, respectively). Discussion: The proposed methodology is anticipated to streamline and standardize BV analysis, offering additional insights into the biology of prostate cancer, with broad applicability to other cancers.
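The precision, recall, and Dice similarity coefficient reported above are pixel-level overlap statistics between a predicted and a reference binary mask. A small illustrative sketch of how they are computed (function name ours):

```python
def seg_scores(pred, truth):
    """Pixel-level precision, recall, and Dice similarity coefficient
    between two binary masks, each given as a set of foreground pixel
    coordinates (any hashable pixel identifier works)."""
    tp = len(pred & truth)                    # pixels both masks mark
    p = tp / len(pred) if pred else 0.0       # precision
    r = tp / len(truth) if truth else 0.0     # recall
    dsc = (2 * tp / (len(pred) + len(truth))
           if (pred or truth) else 1.0)       # Dice coefficient
    return p, r, dsc
```

Note that Dice weights agreement symmetrically, which is why it can sit well below precision and recall when masks are similar in size but imperfectly aligned, as in the numbers above.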
Affiliation(s)
- Kyeong Joo Jung: Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, United States
- Raghu Machiraju: Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, United States
- Robert West: Department of Pathology, Stanford University School of Medicine, Stanford, CA, United States
- James D. Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, United States
- Parag Mallick: Canary Center for Cancer Early Detection, Department of Radiology, Stanford University School of Medicine, Stanford, CA, United States
24
Pereira M, Pinto J, Arteaga B, Guerra A, Jorge RN, Monteiro FJ, Salgado CL. A Comprehensive Look at In Vitro Angiogenesis Image Analysis Software. Int J Mol Sci 2023; 24:17625. [PMID: 38139453 PMCID: PMC10743557 DOI: 10.3390/ijms242417625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2023] [Revised: 12/11/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023] Open
Abstract
One of the complex challenges faced presently by tissue engineering (TE) is the development of vascularized constructs that accurately mimic the extracellular matrix (ECM) of native tissue in which they are inserted to promote vessel growth and, consequently, wound healing and tissue regeneration. TE technique is characterized by several stages, starting from the choice of cell culture and the more appropriate scaffold material that can adequately support and supply them with the necessary biological cues for microvessel development. The next step is to analyze the attained microvasculature, which is reliant on the available labeling and microscopy techniques to visualize the network, as well as metrics employed to characterize it. These are usually attained with the use of software, which has been cited in several works, although no clear standard procedure has been observed to promote the reproduction of the cell response analysis. The present review analyzes not only the various steps previously described in terms of the current standards for evaluation, but also surveys some of the available metrics and software used to quantify networks, along with the detection of analysis limitations and future improvements that could lead to considerable progress for angiogenesis evaluation and application in TE research.
Affiliation(s)
- Mariana Pereira: i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4200-135 Porto, Portugal
- Jéssica Pinto: i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4200-135 Porto, Portugal
- Belén Arteaga: i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4200-135 Porto, Portugal; Faculty of Medicine, University of Granada, Parque Tecnológico de la Salud, Av. de la Investigación 11, 18016 Granada, Spain
- Ana Guerra: INEGI—Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, 4200-465 Porto, Portugal
- Renato Natal Jorge: INEGI—Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, 4200-465 Porto, Portugal; LAETA—Laboratório Associado de Energia, Transportes e Aeronáutica, Universidade do Porto, 4200-165 Porto, Portugal; FEUP—Faculdade de Engenharia, Departamento de Engenharia Metalúrgica e de Materiais, Universidade do Porto, 4200-165 Porto, Portugal
- Fernando Jorge Monteiro: i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4200-135 Porto, Portugal; FEUP—Faculdade de Engenharia, Departamento de Engenharia Metalúrgica e de Materiais, Universidade do Porto, 4200-165 Porto, Portugal; PCCC—Porto Comprehensive Cancer Center, 4200-072 Porto, Portugal
- Christiane Laranjo Salgado: i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4200-135 Porto, Portugal
25
Panconi L, Tansell A, Collins AJ, Makarova M, Owen DM. Three-dimensional topology-based analysis segments volumetric and spatiotemporal fluorescence microscopy. BIOLOGICAL IMAGING 2023; 4:e1. [PMID: 38516632 PMCID: PMC10951800 DOI: 10.1017/s2633903x23000260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 11/13/2023] [Accepted: 12/01/2023] [Indexed: 03/23/2024]
Abstract
Image analysis techniques provide objective and reproducible statistics for interpreting microscopy data. At higher dimensions, three-dimensional (3D) volumetric and spatiotemporal data highlight additional properties and behaviors beyond the static 2D focal plane. However, increased dimensionality carries increased complexity, and existing techniques for general segmentation of 3D data are either primitive, or highly specialized to specific biological structures. Borrowing from the principles of 2D topological data analysis (TDA), we formulate a 3D segmentation algorithm that implements persistent homology to identify variations in image intensity. From this, we derive two separate variants applicable to spatial and spatiotemporal data, respectively. We demonstrate that this analysis yields both sensitive and specific results on simulated data and can distinguish prominent biological structures in fluorescence microscopy images, regardless of their shape. Furthermore, we highlight the efficacy of temporal TDA in tracking cell lineage and the frequency of cell and organelle replication.
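Zero-dimensional persistent homology on image intensity, the core of the approach described above, tracks connected components of superlevel sets as the threshold sweeps downward: each bright region is "born" at its peak intensity and "dies" when it merges into a brighter one, and short-lived components are treated as noise. A toy union-find sketch under that reading (ours, not the authors' implementation):

```python
def count_blobs(img, min_persistence):
    """img: dict mapping (row, col) -> intensity. Counts bright blobs whose
    topological persistence (birth minus death intensity in a superlevel-set
    filtration over 4-connected pixels) is at least min_persistence."""
    parent, birth = {}, {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    deaths = []                              # (birth, death) of merged components
    for px in sorted(img, key=img.get, reverse=True):
        v = img[px]
        parent[px], birth[px] = px, v        # new component born at level v
        r, c = px
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb not in parent:
                continue                     # neighbor not yet in the filtration
            a, b = find(px), find(nb)
            if a == b:
                continue
            if birth[a] > birth[b]:
                a, b = b, a                  # b is now the older (brighter) root
            deaths.append((birth[a], v))     # younger component dies at level v
            parent[a] = b
    lo = min(img.values())
    roots = {find(px) for px in parent}
    lifetimes = ([bd - dd for bd, dd in deaths]
                 + [birth[rt] - lo for rt in roots])  # survivors persist to the end
    return sum(t >= min_persistence for t in lifetimes)
```

On a one-row image with intensities [1, 5, 1, 4, 1], the peaks at 5 and 4 both persist for at least 3 intensity levels, so a persistence cutoff of 3 reports two blobs while a cutoff of 4 keeps only the brighter one.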
Affiliation(s)
- Luca Panconi: Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK; College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK; Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- Amy Tansell: College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK; School of Mathematics, University of Birmingham, Birmingham, UK
- Maria Makarova: School of Biosciences, College of Life and Environmental Science, University of Birmingham, Birmingham, UK; Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Dylan M. Owen: Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK; Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK; School of Mathematics, University of Birmingham, Birmingham, UK
26
Gonzàlez-Farré M, Gibert J, Santiago-Díaz P, Santos J, García P, Massó J, Bellosillo B, Lloveras B, Albanell J, Vázquez I, Comerma L. Automated quantification of stromal tumour infiltrating lymphocytes is associated with prognosis in breast cancer. Virchows Arch 2023; 483:655-663. [PMID: 37500796 DOI: 10.1007/s00428-023-03608-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Revised: 07/10/2023] [Accepted: 07/17/2023] [Indexed: 07/29/2023]
Abstract
Stromal tumour infiltrating lymphocytes (sTIL) in haematoxylin and eosin (H&E) stained sections have been linked to better outcomes and better responses to neoadjuvant therapy in triple-negative and HER2-positive breast cancer (TNBC and HER2 +). However, the infiltrate includes different cell populations that have specific roles in the tumour immune microenvironment. Various studies have found high concordance between sTIL visual quantification and computational assessment, but specific data on the individual prognostic impact of plasma cells or lymphocytes within sTIL are still lacking. In this study, we validated a deep-learning breast cancer sTIL scoring model (smsTIL) based on the segmentation of tumour cells, benign ductal cells, lymphocytes, plasma cells, necrosis, and 'other' cells in whole slide images (WSI). Focusing on HER2 + and TNBC patient samples, we assessed the concordance between sTIL visual scoring and the smsTIL in 130 WSI. Furthermore, we analysed 175 WSI to correlate smsTIL with clinical data and patient outcomes. We found a high correlation between sTIL values scored visually and semi-automatically (R = 0.76; P = 2.2e-16). Patients with higher smsTIL had better overall survival (OS) in TNBC (P = 0.0021). In the TNBC cohort, smsTIL was an independent prognostic factor for OS. As part of this work, we introduce a new segmentation dataset of H&E-stained WSI.
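The concordance figure quoted above (R = 0.76) is a Pearson correlation between visually and semi-automatically scored sTIL values. For reference, the coefficient is the covariance of the two score lists divided by the product of their standard deviations; a stdlib-only sketch (function name ours):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences of scores."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

Perfectly linear agreement gives R = 1.0, and a perfectly inverted relationship gives R = -1.0.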
Affiliation(s)
- Mònica Gonzàlez-Farré: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain; Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain
- Joan Gibert: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain; Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain
- Pablo Santiago-Díaz: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain
- Jordina Santos: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain
- Pilar García: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain
- Jordi Massó: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain
- Beatriz Bellosillo: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain; Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain; Department of Medicine and Life Sciences (MELIS), University Pompeu Fabra, Doctor Aiguader 88, 08003, Barcelona, Spain
- Belén Lloveras: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain; Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain; Department of Medicine and Life Sciences (MELIS), University Pompeu Fabra, Doctor Aiguader 88, 08003, Barcelona, Spain
- Joan Albanell: Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain; Department of Medicine and Life Sciences (MELIS), University Pompeu Fabra, Doctor Aiguader 88, 08003, Barcelona, Spain; Department of Medical Oncology, Hospital del Mar, 08003, Barcelona, Spain; Center for Biomedical Network Research On Cancer (CIBERONC), 28029, Madrid, Spain
- Ivonne Vázquez: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain
- Laura Comerma: Department of Pathology, Hospital del Mar, Passeig Marítim de la Barceloneta 25-29, 08003, Barcelona, Spain; Cancer Research Program, IMIM (Hospital del Mar Medical Research Institute), 08003, Barcelona, Spain
27
Lin Y, Qu Z, Chen H, Gao Z, Li Y, Xia L, Ma K, Zheng Y, Cheng KT. Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training. Med Image Anal 2023; 89:102933. [PMID: 37611532 DOI: 10.1016/j.media.2023.102933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 07/21/2023] [Accepted: 08/10/2023] [Indexed: 08/25/2023]
Abstract
Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to get coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that only requires point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images that transforms the hematoxylin component images into the H&E stained images to gain better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method using two public datasets. Both visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods, and its competitive performance compared to the fully-supervised methods. Codes are available at https://github.com/hust-linyi/SC-Net.
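The first step described above, deriving coarse labels from point annotations via a Voronoi diagram, amounts to assigning every pixel the label of its nearest annotated nucleus centre. An illustrative brute-force sketch of that assignment (function name ours; the paper additionally refines labels with k-means clustering, omitted here):

```python
def voronoi_labels(points, shape):
    """Assign each pixel of an (h, w) grid the index of its nearest
    annotated nucleus centre, i.e. its Voronoi cell. Brute-force nearest
    point search in squared Euclidean distance; fine for a sketch."""
    h, w = shape
    labels = []
    for r in range(h):
        row = []
        for c in range(w):
            _, idx = min(((r - pr) ** 2 + (c - pc) ** 2, i)
                         for i, (pr, pc) in enumerate(points))
            row.append(idx)
        labels.append(row)
    return labels
```

In the weakly supervised setup, pixels near Voronoi cell boundaries are the ones most likely to be mislabeled, which is why the coarse labels are treated as noisy supervision rather than ground truth.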
Affiliation(s)
- Yi Lin: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhiyong Qu: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Hao Chen: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhongke Gao: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lili Xia: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Kai Ma: Tencent Jarvis Lab, Shenzhen, China
- Kwang-Ting Cheng: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
28
Abstract
Multiplex imaging has emerged as an invaluable tool for immune-oncologists and translational researchers, enabling them to examine intricate interactions among immune cells, stroma, matrix, and malignant cells within the tumor microenvironment (TME). It holds significant promise in the quest to discover improved biomarkers for treatment stratification and identify novel therapeutic targets. Nonetheless, several challenges exist in the realms of study design, experiment optimization, and data analysis. In this review, our aim is to present an overview of the utilization of multiplex imaging in immuno-oncology studies and inform novice researchers about the fundamental principles at each stage of the imaging and analysis process.
Affiliation(s)
- Chen Zhao: Thoracic and GI Malignancies Branch, CCR, NCI, Bethesda, Maryland, USA; Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
- Ronald N Germain: Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
29
Timakova A, Ananev V, Fayzullin A, Makarov V, Ivanova E, Shekhter A, Timashev P. Artificial Intelligence Assists in the Detection of Blood Vessels in Whole Slide Images: Practical Benefits for Oncological Pathology. Biomolecules 2023; 13:1327. [PMID: 37759727 PMCID: PMC10526383 DOI: 10.3390/biom13091327] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 08/23/2023] [Accepted: 08/28/2023] [Indexed: 09/29/2023] Open
Abstract
The analysis of the microvasculature and the assessment of angiogenesis have significant prognostic value in various diseases, including cancer. The search for invasion into the blood and lymphatic vessels and the assessment of angiogenesis are important aspects of oncological diagnosis. These features determine the prognosis and aggressiveness of the tumor. Traditional manual evaluation methods are time-consuming and subject to inter-observer variability. Blood vessel detection is a task well suited to artificial intelligence, which is capable of rapidly analyzing thousands of tissue structures in whole slide images. The development of computer vision solutions requires the segmentation of tissue regions, the extraction of features and the training of machine learning models. In this review, we focus on the methodologies employed by researchers to identify blood vessels and vascular invasion across a range of tumor localizations, including breast, lung, colon, brain, renal, pancreatic, gastric and oral cavity cancers. Contemporary models herald a new era of computational pathology in morphological diagnostics.
Affiliation(s)
- Anna Timakova: Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia
- Vladislav Ananev: Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, 173003 Veliky Novgorod, Russia
- Alexey Fayzullin: Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia
- Vladimir Makarov: Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, 173003 Veliky Novgorod, Russia
- Elena Ivanova: Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia; B.V. Petrovsky Russian Research Center of Surgery, 2 Abrikosovskiy Lane, 119991 Moscow, Russia
- Anatoly Shekhter: Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia
- Peter Timashev: Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia; World-Class Research Center “Digital Biodesign and Personalized Healthcare”, Sechenov First Moscow State Medical University (Sechenov University), 8-2 Trubetskaya St., 119991 Moscow, Russia
30
Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Kadry S. A Novel Heteromorphous Convolutional Neural Network for Automated Assessment of Tumors in Colon and Lung Histopathology Images. Biomimetics (Basel) 2023; 8:370. [PMID: 37622975 PMCID: PMC10452605 DOI: 10.3390/biomimetics8040370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 07/31/2023] [Accepted: 08/03/2023] [Indexed: 08/26/2023] Open
Abstract
The automated assessment of tumors in medical image analysis encounters challenges due to the resemblance of colon and lung tumors to non-mitotic nuclei and their heteromorphic characteristics. An accurate assessment of tumor nuclei presence is crucial for determining tumor aggressiveness and grading. This paper proposes a new method called ColonNet, a heteromorphous convolutional neural network (CNN) with a feature grafting methodology categorically configured for analyzing mitotic nuclei in colon and lung histopathology images. The ColonNet model consists of two stages: first, identifying potential mitotic patches within the histopathological imaging areas, and second, categorizing these patches into squamous cell carcinomas, adenocarcinomas (lung), benign (lung), benign (colon), and adenocarcinomas (colon) based on the model's guidelines. We develop and employ our deep CNNs, each capturing distinct structural, textural, and morphological properties of tumor nuclei, to construct the heteromorphous deep CNN. The proposed ColonNet model is evaluated by comparison with state-of-the-art CNNs. The results demonstrate that our model surpasses others on the test set, achieving an impressive F1 score of 0.96, sensitivity and specificity of 0.95, and an area under the accuracy curve of 0.95. These outcomes underscore our hybrid model's superior performance, excellent generalization, and accuracy, highlighting its potential as a valuable tool to support pathologists in diagnostic activities.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway

31
Zhang E, Xie R, Bian Y, Wang J, Tao P, Zhang H, Jiang S. Cervical cell nuclei segmentation based on GC-UNet. Heliyon 2023; 9:e17647. [PMID: 37456010 PMCID: PMC10345258 DOI: 10.1016/j.heliyon.2023.e17647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 06/23/2023] [Accepted: 06/24/2023] [Indexed: 07/18/2023] Open
Abstract
Cervical cancer diagnosis hinges significantly on precise nuclei segmentation at early stages, which however, remains largely elusive due to challenges such as overlapping cells and blurred nuclei boundaries. This paper presents a novel deep neural network (DNN), the Global Context UNet (GC-UNet), designed to adeptly handle intricate environments and deliver accurate cell segmentation. At the core of GC-UNet is DenseNet, which serves as the backbone, encoding cell images and capitalizing on pre-existing knowledge. A unique context-aware pooling module, equipped with a gating model, is integrated for effective encoding of ImageNet pre-trained features, ensuring essential features at different levels are retained. Further, a decoder grounded in a global context attention block is employed to foster global feature interaction and refine the predicted masks.
Affiliation(s)
- Enguang Zhang
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Zhuhai College of Science and Technology, Zhuhai, China
- Rixin Xie
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Yuxin Bian
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Jiayan Wang
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Pengyi Tao
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Heng Zhang
- Faculty of Education, The University of Hong Kong, Pokfulam Road, Hong Kong, China
- Shenlu Jiang
- School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China

32
Islam Sumon R, Bhattacharjee S, Hwang YB, Rahman H, Kim HC, Ryu WS, Kim DM, Cho NH, Choi HK. Densely Convolutional Spatial Attention Network for nuclei segmentation of histological images for computational pathology. Front Oncol 2023; 13:1009681. [PMID: 37305563 PMCID: PMC10248729 DOI: 10.3389/fonc.2023.1009681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 05/05/2023] [Indexed: 06/13/2023] Open
Abstract
Introduction Automatic nuclear segmentation in digital microscopic tissue images can aid pathologists to extract high-quality features for nuclear morphometrics and other analyses. However, image segmentation is a challenging task in medical image processing and analysis. This study aimed to develop a deep learning-based method for nuclei segmentation of histological images for computational pathology. Methods The original U-Net model sometimes has limitations in extracting significant features. Herein, we present the Densely Convolutional Spatial Attention Network (DCSA-Net) model based on U-Net to perform the segmentation task. Furthermore, the developed model was tested on an external multi-tissue dataset, MoNuSeg. Developing deep learning algorithms that segment nuclei well requires a large quantity of annotated data, which is expensive and often infeasible to obtain. We collected hematoxylin and eosin-stained image datasets from two hospitals to train the model with a variety of nuclear appearances. Because of the limited number of annotated pathology images, we introduced a small publicly accessible dataset of prostate cancer (PCa) with more than 16,000 labeled nuclei. To construct our proposed model, we developed the DCSA module, an attention mechanism for capturing useful information from raw images. We also used several other artificial intelligence-based segmentation methods and tools to compare their results to our proposed technique. Results To prioritize the performance of nuclei segmentation, we evaluated the model's outputs based on the Accuracy, Dice coefficient (DC), and Jaccard coefficient (JC) scores. The proposed technique outperformed the other methods and achieved superior nuclei segmentation with accuracy, DC, and JC of 96.4% (95% confidence interval [CI]: 96.2-96.6), 81.8 (95% CI: 80.8-83.0), and 69.3 (95% CI: 68.2-70.0), respectively, on the internal test dataset.
Conclusion Our proposed method demonstrates superior performance in segmenting cell nuclei of histological images from internal and external datasets, and outperforms many standard segmentation algorithms used for comparative analysis.
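The Dice and Jaccard coefficients used for evaluation above are standard overlap measures between a predicted and a ground-truth mask. A minimal pure-Python sketch, with masks represented as sets of pixel coordinates (illustrative only, not the authors' evaluation code):

```python
def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for pixel-coordinate sets."""
    if not pred and not gt:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def jaccard(pred, gt):
    """Jaccard coefficient (intersection over union): |A∩B| / |A∪B|."""
    if not pred and not gt:
        return 1.0
    return len(pred & gt) / len(pred | gt)

# Toy example: two 4-pixel masks overlapping in 2 pixels.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
gt = {(0, 1), (1, 1), (0, 2), (1, 2)}
print(dice(pred, gt))     # 2*2/(4+4) = 0.5
print(jaccard(pred, gt))  # 2/6 ≈ 0.333
```

The Jaccard coefficient is the same quantity as intersection over union (IoU), and the two measures are monotonically related: Dice = 2J / (1 + J).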
Affiliation(s)
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Subrata Bhattacharjee
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hafizur Rahman
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Wi-Sun Ryu
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Dong Min Kim
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Nam-Hoon Cho
- Department of Pathology, Yonsei University Hospital, Seoul, Republic of Korea
- Heung-Kook Choi
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea

33
Wang H, Xian M, Vakanski A, Shareef B. SIAN: Style-guided instance-adaptive normalization for multi-organ histopathology image synthesis. Proceedings. IEEE International Symposium on Biomedical Imaging 2023; 2023:10.1109/isbi53787.2023.10230507. [PMID: 38572450 PMCID: PMC10989245 DOI: 10.1109/isbi53787.2023.10230507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2024]
Abstract
Existing deep neural networks for histopathology image synthesis cannot generate image styles that align with different organs, and cannot produce accurate boundaries of clustered nuclei. To address these issues, we propose a style-guided instance-adaptive normalization (SIAN) approach to synthesize realistic color distributions and textures for histopathology images from different organs. SIAN contains four phases: semantization, stylization, instantiation, and modulation. The first two phases synthesize image semantics and styles by using semantic maps and learned image style vectors. The instantiation module integrates geometrical and topological information and generates accurate nuclei boundaries. We validate the proposed approach on a multiple-organ dataset. Extensive experimental results demonstrate that the proposed method generates more realistic histopathology images than four state-of-the-art approaches for five organs. By incorporating synthetic images from the proposed approach into model training, an instance segmentation network can achieve state-of-the-art performance.
Affiliation(s)
- Haotian Wang
- Department of Computer Science, University of Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, USA
- Bryar Shareef
- Department of Computer Science, University of Idaho, USA

34
Butte S, Wang H, Vakanski A, Xian M. Enhanced Sharp-GAN for histopathology image synthesis. Proceedings. IEEE International Symposium on Biomedical Imaging 2023; 2023:10.1109/isbi53787.2023.10230516. [PMID: 38572451 PMCID: PMC10989243 DOI: 10.1109/isbi53787.2023.10230516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2024]
Abstract
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection. However, existing methods struggle to produce realistic images that have accurate nuclei boundaries and fewer artifacts, which limits their application in downstream tasks. To address these challenges, we propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization. The proposed approach uses the skeleton map of nuclei to integrate nuclei topology and separate touching nuclei. In the loss function, we propose two new contour regularization terms that enhance the contrast between contour and non-contour pixels and increase the similarity between contour pixels. We evaluate the proposed approach on two datasets using image quality metrics and a downstream task (nuclei segmentation). The proposed approach outperforms Sharp-GAN in all four image quality metrics on both datasets. By integrating 6k synthetic images from the proposed approach into training, a nuclei segmentation model achieves state-of-the-art segmentation performance on the TNBC dataset, with detection quality (DQ), segmentation quality (SQ), panoptic quality (PQ), and aggregated Jaccard index (AJI) of 0.855, 0.863, 0.691, and 0.683, respectively.
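The panoptic quality (PQ) reported above factorizes as PQ = DQ × SQ under the standard definition, where a prediction matches a ground-truth instance when their IoU exceeds 0.5 (a threshold that guarantees matches are unique). A minimal sketch of that definition with instances as pixel-coordinate sets (illustrative only, not the paper's evaluation code):

```python
def panoptic_quality(preds, gts):
    """PQ = DQ * SQ with the standard IoU > 0.5 matching rule.
    preds, gts: lists of instances, each a set of (row, col) pixels."""
    matched, iou_sum = 0, 0.0
    unmatched_gts = list(gts)
    for p in preds:
        for g in unmatched_gts:
            iou = len(p & g) / len(p | g)
            if iou > 0.5:              # IoU > 0.5 makes the match unique
                matched += 1
                iou_sum += iou
                unmatched_gts.remove(g)
                break
    fp = len(preds) - matched          # unmatched predictions
    fn = len(gts) - matched            # unmatched ground truths
    dq = matched / (matched + 0.5 * fp + 0.5 * fn) if (matched + fp + fn) else 1.0
    sq = iou_sum / matched if matched else 0.0
    return dq * sq, dq, sq

# One perfect match plus one missed ground-truth instance.
preds = [{(0, 0), (0, 1)}]
gts = [{(0, 0), (0, 1)}, {(5, 5)}]
pq, dq, sq = panoptic_quality(preds, gts)
print(dq, sq, pq)  # DQ = 2/3, SQ = 1.0, PQ = 2/3
```

DQ is an F1-style detection score over matched instances, while SQ is the mean IoU of the matches, so PQ jointly penalizes missed detections and sloppy boundaries.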
35
Zhao T, Fu C, Tian Y, Song W, Sham CW. GSN-HVNET: A Lightweight, Multi-Task Deep Learning Framework for Nuclei Segmentation and Classification. Bioengineering (Basel) 2023; 10:bioengineering10030393. [PMID: 36978784 PMCID: PMC10045412 DOI: 10.3390/bioengineering10030393] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 03/13/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023] Open
Abstract
Nuclei segmentation and classification are two basic and essential tasks in computer-aided diagnosis of digital pathology images, and deep-learning-based methods have achieved significant success. Unfortunately, most of the existing studies accomplish the two tasks by splicing two related neural networks directly, resulting in repetitive computation efforts and a redundant, large neural network. Thus, this paper proposes a lightweight deep learning framework (GSN-HVNET) with an encoder-decoder structure for simultaneous segmentation and classification of nuclei. The decoder consists of three branches outputting the semantic segmentation of nuclei, the horizontal and vertical (HV) distances of nuclei pixels to their mass centers, and the class of each nucleus, respectively. The instance segmentation results are obtained by combining the outputs of the first and second branches. To reduce the computational cost and improve the network stability under small batch sizes, we propose two newly designed blocks, Residual-Ghost-SN (RGS) and Dense-Ghost-SN (DGS). Furthermore, considering the practical usage in pathological diagnosis, we redefine the classification principle of the CoNSeP dataset. Experimental results demonstrate that the proposed model outperforms other state-of-the-art models in terms of segmentation and classification accuracy by a significant margin while maintaining high computational efficiency.
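The horizontal and vertical (HV) distance targets described above can be illustrated by deriving signed per-pixel offsets to each instance's mass center from a labeled map. HoVer-Net-style methods typically also normalize these offsets to [-1, 1] per instance; this simplified sketch omits that step:

```python
def hv_maps(label_map):
    """Per-pixel horizontal/vertical distances to each instance's
    mass center (centroid). label_map: 2D list of ints, 0 = background,
    positive values = instance ids."""
    h, w = len(label_map), len(label_map[0])
    pixels = {}                       # instance id -> list of (y, x)
    for y in range(h):
        for x in range(w):
            lab = label_map[y][x]
            if lab:
                pixels.setdefault(lab, []).append((y, x))
    h_map = [[0.0] * w for _ in range(h)]
    v_map = [[0.0] * w for _ in range(h)]
    for pts in pixels.values():
        cy = sum(p[0] for p in pts) / len(pts)  # centroid row
        cx = sum(p[1] for p in pts) / len(pts)  # centroid column
        for y, x in pts:
            h_map[y][x] = x - cx      # signed horizontal offset
            v_map[y][x] = y - cy      # signed vertical offset
    return h_map, v_map

# A single 1x3 "nucleus" with id 1: centroid at column x = 1.
labels = [[1, 1, 1]]
hm, vm = hv_maps(labels)
print(hm[0])  # [-1.0, 0.0, 1.0]
print(vm[0])  # [0.0, 0.0, 0.0]
```

Because the offsets change sign sharply between touching instances, gradients of these maps give strong separating boundaries, which is what makes the second decoder branch useful for instance separation.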
Affiliation(s)
- Tengfei Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110819, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Yunjia Tian
- State Grid Liaoning Information and Communication Company, Shenyang 110006, China
- Wei Song
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chiu-Wing Sham
- School of Computer Science, The University of Auckland, Auckland 1142, New Zealand

36
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. EVOLVING SYSTEMS 2023; 15:1-46. [PMID: 38625364 PMCID: PMC9987406 DOI: 10.1007/s12530-023-09491-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and tell apart independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey on nucleus segmentation using deep learning in the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India

37
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 11/24/2022] [Accepted: 02/23/2023] [Indexed: 03/06/2023]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, such as the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised learning and fully supervised learning approaches. In this work, we propose a weakly-supervised nuclei segmentation method in a two-stage training manner that only requires annotation of the nuclear centroids. First, we generate boundary and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, which is a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning to train the network again. Our method shows highly competitive performance of cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
Affiliation(s)
- Ruoyu Guo
- School of Computer Science and Engineering, University of New South Wales, Australia
- Kunzi Xie
- School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia

38
Zhang W, Zhang J, Wang X, Yang S, Huang J, Yang W, Wang W, Han X. Merging nucleus datasets by correlation-based cross-training. Med Image Anal 2023; 84:102705. [PMID: 36525843 DOI: 10.1016/j.media.2022.102705] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 11/16/2022] [Accepted: 11/24/2022] [Indexed: 12/12/2022]
Abstract
Fine-grained nucleus classification is challenging because of the high inter-class similarity and intra-class variability. Therefore, a large amount of labeled data is required for training effective nucleus classification models. However, it is challenging to label a large-scale nucleus classification dataset comparable to ImageNet in natural images, considering that high-quality nucleus labeling requires specific domain knowledge. In addition, the existing publicly available datasets are often inconsistently labeled with divergent labeling criteria. Due to this inconsistency, conventional models have to be trained on each dataset separately and work independently to infer their own classification results, limiting their classification performance. To fully utilize all annotated datasets, we formulate the nucleus classification task as a multi-label problem with missing labels, so that all datasets can be used in a unified framework. Specifically, we merge all datasets and combine their labels as multiple labels. Thus, each sample has one ground-truth label and several missing labels. We devise a base classification module that is trained using all data but sparsely supervised by the ground-truth labels only. We then exploit the correlation among different label sets by a label correlation module. By doing so, we obtain two trained basic modules and further cross-train them with both ground-truth labels and pseudo labels for the missing ones. Importantly, data without any ground-truth labels can also be involved in our framework, as we can regard them as data with all labels missing and generate the corresponding pseudo labels. We carefully re-organized multiple publicly available nucleus classification datasets, converted them into a uniform format, and tested the proposed framework on them. Experimental results show substantial improvement compared to the state-of-the-art methods.
The code and data are available at https://w-h-zhang.github.io/projects/dataset_merging/dataset_merging.html.
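The multi-label-with-missing-labels formulation above amounts to supervising only the observed entries of each merged label vector. A minimal sketch of such a masked binary cross-entropy (illustrative only; the function name and encoding of missing labels as None are assumptions, not the authors' code):

```python
import math

def masked_bce(probs, labels):
    """Binary cross-entropy averaged over observed labels only.
    probs: predicted probabilities per label.
    labels: 1/0 for observed entries, None for missing ones."""
    terms = []
    for p, y in zip(probs, labels):
        if y is None:          # missing label: contributes no loss
            continue
        terms.append(-(y * math.log(p) + (1 - y) * math.log(1 - p)))
    return sum(terms) / len(terms) if terms else 0.0

# Three label sets merged; only the first label is observed here.
print(masked_bce([0.9, 0.4, 0.6], [1, None, None]))  # -ln(0.9) ≈ 0.105
```

Cross-training as described then fills the None entries with pseudo labels from the partner module, after which the same loss covers every position.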
Affiliation(s)
- Wenhua Zhang
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
- Jun Zhang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Xiyue Wang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Sen Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Wei Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Xiao Han
- Tencent AI Lab, Shenzhen, Guangdong, China

39
Naik RR, Rajan A, Kalita N. Automated image analysis method to detect and quantify fat cell infiltration in hematoxylin and eosin stained human pancreas histology images. BBA Advances 2023; 3:100084. [PMID: 37082253 PMCID: PMC10074932 DOI: 10.1016/j.bbadva.2023.100084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2023] Open
Abstract
Fatty infiltration in the pancreas leading to steatosis is a major risk factor in pancreas transplantation. Hematoxylin and eosin (H and E) is one of the common histological staining techniques that provides information on the tissue cytoarchitecture. Adipose (fat) cell accumulation in the pancreas has been shown to impact beta cell survival and endocrine function, and can cause pancreatic steatosis and non-alcoholic fatty pancreas disease (NAFPD). The current automated tools for fat analysis (e.g., Adiposoft) are suited for white fat tissue, which is homogeneous and easier to segment, unlike heterogeneous tissues such as pancreas where fat cells continue to play critical physiopathological functions. The currently available pancreas segmentation tools focus on endocrine islet segmentation based on cell nuclei detection for diagnosis of pancreatic cancer. In the current study, we present a fat quantifying tool, Fatquant, which identifies fat cells in heterogeneous H and E tissue sections with reference to fat cell diameter. Using histological images from a public database, we observed an intersection over union of 0.797 to 0.962 and 0.675 to 0.937 for manual versus Fatquant analysis of pancreas and liver, respectively.
Affiliation(s)
- Roshan Ratnakar Naik
- Department of Biotechnology, Parvatibai Chowgule College of Arts & Science, Margao-Goa, 403601
- Corresponding author.
- Annie Rajan
- Department of Computer Science, Dhempe College of Arts and Science, Miramar, Panaji-Goa, 403 001

40
40
|
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 09/13/2022] [Accepted: 11/27/2022] [Indexed: 12/13/2022]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium

41
41
|
Wang X, Yu G, Yan Z, Wan L, Wang W, Cui L. Lung Cancer Subtype Diagnosis by Fusing Image-Genomics Data and Hybrid Deep Networks. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:512-523. [PMID: 34855599 DOI: 10.1109/tcbb.2021.3132292] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Accurate diagnosis of cancer subtypes is crucial for precise treatment, because different cancer subtypes involve different pathology and require different therapies. Although deep learning techniques have achieved great success in computer vision and other fields, they do not work well on lung cancer subtype diagnosis because the distinction between slide images of different cancer subtypes is ambiguous. Furthermore, they often over-fit to high-dimensional genomics data with limited samples, and do not fuse the image and genomics data in a sensible way. In this paper, we propose a hybrid deep network based approach, LungDIG, for Lung cancer subtype Diagnosis by fusing Image-Genomics data. LungDIG first tiles the tissue slide image into small patches and extracts the patch-level features by fine-tuning an Inception-V3 model. Since the patches may contain some false positives in non-diagnostic regions, it further designs a patch-level feature combination strategy to integrate the extracted patch features and maintain the diversity between different cancer subtypes. At the same time, it extracts the genomics features from Copy Number Variation data by an attention-based nonlinear extractor. Next, it fuses the image and genomics features by an attention-based multilayer perceptron (MLP) to diagnose cancer subtype. Experiments on TCGA lung cancer data show that LungDIG not only achieves higher accuracy for cancer subtype diagnosis than state-of-the-art methods, but also has high authenticity and good interpretability.
42
Differential diagnosis of thyroid nodule capsules using random forest guided selection of image features. Sci Rep 2022; 12:21636. [PMID: 36517531 PMCID: PMC9751070 DOI: 10.1038/s41598-022-25788-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 12/05/2022] [Indexed: 12/15/2022] Open
Abstract
Microscopic evaluation of tissue sections stained with hematoxylin and eosin is the current gold standard for diagnosing thyroid pathology. Digital pathology is gaining momentum, providing the pathologist with cues beyond traditional routes when placing a diagnosis; it is therefore extremely important to develop new image analysis methods that can extract image features with diagnostic potential. In this work, we use histogram and texture analysis to extract features from microscopic images acquired on thin thyroid nodule capsule sections and demonstrate how they enable the differential diagnosis of thyroid nodules. Targeted thyroid nodules are benign (i.e., follicular adenoma) and malignant (i.e., papillary thyroid carcinoma and its sub-type arising within a follicular adenoma). Our results show that the considered image features enable the quantitative characterization of the collagen capsule surrounding thyroid nodules and provide an accurate classification of nodule type using random forest.
43
Zhang W, Zhang J, Yang S, Wang X, Yang W, Huang J, Wang W, Han X. Knowledge-Based Representation Learning for Nucleus Instance Classification From Histopathological Images. IEEE Transactions on Medical Imaging 2022; 41:3939-3951. [PMID: 36037453 DOI: 10.1109/tmi.2022.3201981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The classification of nuclei in H&E-stained histopathological images is a fundamental step in the quantitative analysis of digital pathology. Most existing methods employ multi-class classification on the detected nucleus instances, while the annotation scale greatly limits their performance. Moreover, they often downplay the contextual information surrounding nucleus instances that is critical for classification. To explicitly provide contextual information to the classification model, we design a new structured input consisting of a content-rich image patch and a target instance mask. The image patch provides rich contextual information, while the target instance mask indicates the location of the instance to be classified and emphasizes its shape. Benefiting from our structured input format, we propose Structured Triplet for representation learning, a triplet learning framework on unlabelled nucleus instances with customized positive and negative sampling strategies. We pre-train a feature extraction model based on this framework with a large-scale unlabeled dataset, making it possible to train an effective classification model with limited annotated data. We also add two auxiliary branches, namely the attribute learning branch and the conventional self-supervised learning branch, to further improve its performance. As part of this work, we will release a new dataset of H&E-stained pathology images with nucleus instance masks, containing 20,187 patches of size 1024×1024, where each patch comes from a different whole-slide image. The model pre-trained on this dataset with our framework significantly reduces the burden of extensive labeling. We show a substantial improvement in nucleus classification accuracy compared with the state-of-the-art methods.
44
Chen Y, Jia Y, Zhang X, Bai J, Li X, Ma M, Sun Z, Pei Z. TSHVNet: Simultaneous Nuclear Instance Segmentation and Classification in Histopathological Images Based on Multiattention Mechanisms. Biomed Research International 2022; 2022:7921922. [PMID: 36457339 PMCID: PMC9708332 DOI: 10.1155/2022/7921922] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/30/2022] [Accepted: 10/03/2022] [Indexed: 09/27/2023]
Abstract
Accurate nuclear instance segmentation and classification in histopathologic images are the foundation of cancer diagnosis and prognosis. Several challenges are restricting the development of accurate simultaneous nuclear instance segmentation and classification. Firstly, the visual appearances of different category nuclei can be similar, making it difficult to distinguish different types of nuclei. Secondly, it is thorny to separate highly clustered nuclear instances. Thirdly, few current studies have considered the global dependencies among diverse nuclear instances. In this article, we propose a novel deep learning framework named TSHVNet which integrates multiattention modules (i.e., Transformer and SimAM) into the state-of-the-art HoVer-Net to achieve more accurate nuclear instance segmentation and classification. Specifically, the Transformer attention module is employed on the trunk of the HoVer-Net to model the long-distance relationships of diverse nuclear instances. The SimAM attention modules are deployed on both the trunk and branches to apply 3D channel and spatial attention to assign neurons appropriate weights. Finally, we validate the proposed method on two public datasets: PanNuke and CoNSeP. The comparison results have shown the outstanding performance of the proposed TSHVNet network among state-of-the-art methods. Particularly, as compared to the original HoVer-Net, the performance of nuclear instance segmentation evaluated by the PQ index has shown 1.4% and 2.8% increases on the CoNSeP and PanNuke datasets, respectively, and the performance of nuclear classification measured by F1_score has increased by 2.4% and 2.5% on the CoNSeP and PanNuke datasets, respectively. Therefore, the proposed multiattention-based TSHVNet is of great potential in simultaneous nuclear instance segmentation and classification.
Affiliation(s)
- Yuli Chen
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Yuhang Jia
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Xinxin Zhang
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Jiayang Bai
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Xue Li
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Miao Ma
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Zengguo Sun
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
- Zhao Pei
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
|
45
|
Kosaraju S, Park J, Lee H, Yang JW, Kang M. Deep learning-based framework for slide-based histopathological image analysis. Sci Rep 2022; 12:19075. [PMID: 36351997 PMCID: PMC9646838 DOI: 10.1038/s41598-022-23166-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 10/26/2022] [Indexed: 11/11/2022] Open
Abstract
Digital pathology coupled with advanced machine learning (e.g., deep learning) has been changing the paradigm of whole-slide histopathological image (WSI) analysis. Major applications of machine learning in digital pathology include automatic cancer classification, survival analysis, and subtyping from pathological images. While most pathological image analyses are based on patch-wise processing due to the extremely large size of histopathology images, several applications predict a single clinical outcome or perform pathological diagnosis per slide (e.g., cancer classification, survival analysis). However, current slide-based analyses are task-dependent, and a general framework for slide-based WSI analysis has seldom been investigated. We propose a novel slide-based histopathology analysis framework that creates a WSI representation map, called HipoMap, that can be applied to any slide-based problem in combination with convolutional neural networks. HipoMap converts a WSI of arbitrary shape and size into a structured image-type representation. HipoMap outperformed existing methods in extensive experiments with various settings and datasets. As a general and flexible framework for slide-based analysis, HipoMap achieved an Area Under the Curve (AUC) of 0.96±0.026 (5% improvement) in lung cancer classification, and a c-index of 0.787±0.013 (3.5% improvement) and a coefficient of determination (R²) of 0.978±0.032 (24% improvement) in survival analysis and survival prediction with TCGA lung cancer data, respectively. These results represent a significant improvement over current state-of-the-art methods on each task. We further discuss the experimental results of HipoMap from pathological viewpoints and verify its performance using publicly available TCGA datasets. A Python package is available at https://pypi.org/project/hipomap and can be installed with pip.
The open-source code in Python is available at: https://github.com/datax-lab/HipoMap .
Affiliation(s)
- Sai Kosaraju
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Jeongyeon Park
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Hyun Lee
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Jung Wook Yang
- Department of Pathology, Gyeongsang National University Hospital, Gyeongsang National University College of Medicine, Jinju, South Korea
- Mingon Kang
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
|
46
|
Automatic pseudo-coloring approaches to improve visual perception and contrast in polarimetric images of biological tissues. Sci Rep 2022; 12:18479. [PMID: 36323771 PMCID: PMC9630374 DOI: 10.1038/s41598-022-23330-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 10/29/2022] [Indexed: 11/06/2022] Open
Abstract
Imaging polarimetry methods have proven suitable for enhancing image contrast between tissues and structures in organic samples, and can even reveal structures hidden in regular intensity images. These methods are now used in a wide range of biological applications, such as the early diagnosis of different pathologies. To include the discriminatory potential of different polarimetric observables in a single image, a common strategy reported in the literature is to associate different observables with different color channels, giving rise to pseudo-colored images that help visualize different tissues in samples. However, previously reported polarimetry-based pseudo-colored images of tissues are mostly built from simple linear combinations of polarimetric observables whose weights are set ad hoc, and are thus far from optimal. In this context, we propose two pseudo-coloring methods. The first is based on the Euclidean distance between the actual value of each pixel and an average value taken over a given region of interest in the image. The second is based on the likelihood of each pixel belonging to a given class, where the classes are defined by a statistical model describing the distribution of pixel values in the image. The methods are experimentally validated on four different biological samples, two of animal origin and two of vegetal origin. The results demonstrate the potential of these methods for biomedical and botanical applications.
|
47
|
Herbsthofer L, Tomberger M, Smolle MA, Prietl B, Pieber TR, López-García P. Cell2Grid: an efficient, spatial, and convolutional neural network-ready representation of cell segmentation data. J Med Imaging (Bellingham) 2022; 9:067501. [PMID: 36466076 PMCID: PMC9709305 DOI: 10.1117/1.jmi.9.6.067501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 11/03/2022] [Indexed: 12/03/2022] Open
Abstract
Purpose: Cell segmentation algorithms are commonly used to analyze large histologic images, as they facilitate interpretation, but they also complicate hypothesis-free spatial analysis. Therefore, many applications instead train convolutional neural networks (CNNs) on high-resolution images that resolve individual cells, but their practical application is severely limited by computational resources. In this work, we propose and investigate an alternative spatial data representation based on cell segmentation data for direct training of CNNs. Approach: We introduce and analyze the properties of Cell2Grid, an algorithm that generates compact images from cell segmentation data by placing individual cells into a low-resolution grid and resolving possible cell conflicts. For evaluation, we present a case study on colorectal cancer relapse prediction using fluorescent multiplex immunohistochemistry images. Results: We generated Cell2Grid images at 5-μm resolution that were 100 times smaller than the original images. Cell features, such as phenotype counts and nearest-neighbor cell distances, remained similar to those of the original cell segmentation tables (p < 0.0001). These images could be fed directly to a CNN to predict colon cancer relapse. Our experiments showed that the test-set error rate was reduced by 25% compared with CNNs trained on images rescaled to 5 μm with bilinear interpolation. Compared with images at 1-μm resolution (bilinear rescaling), our method reduced CNN training time by 85%. Conclusions: Cell2Grid is an efficient spatial data representation algorithm that enables the use of conventional CNNs on cell segmentation data. Its cell-based representation additionally opens a door to simplified model interpretation and synthetic image generation.
Affiliation(s)
- Laurin Herbsthofer
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Martina Tomberger
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- Maria A. Smolle
- Medical University of Graz, Department of Orthopaedics and Trauma, Graz, Austria
- Barbara Prietl
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria
- Thomas R. Pieber
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria
- Health Institute for Biomedicine and Health Sciences, Joanneum Research Forschungsgesellschaft mbH, Graz, Austria
|
48
|
Deshmukh G, Susladkar O, Makwana D, Chandra Teja R S, Kumar S N, Mittal S. FEEDNet: a feature enhanced encoder-decoder LSTM network for nuclei instance segmentation for histopathological diagnosis. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 07/29/2022] [Indexed: 11/12/2022]
Abstract
Objective. Automated cell nuclei segmentation is vital for the histopathological diagnosis of cancer. However, nuclei segmentation from hematoxylin and eosin (HE)-stained whole slide images (WSIs) remains a challenge due to noise-induced intensity variations and uneven staining. The goal of this paper is to propose a novel deep learning model for accurately segmenting the nuclei in HE-stained WSIs. Approach. We introduce FEEDNet, a novel encoder-decoder network that uses LSTM units and 'feature enhancement blocks' (FE-blocks). Our proposed FE-block avoids the loss of location information incurred by pooling layers by concatenating a downsampled version of the original image to preserve pixel intensities. FEEDNet uses an LSTM unit to capture multi-channel representations compactly. In addition, for datasets that provide class information, we train a multiclass segmentation model that generates a mask for each class at the output. Using this information, we generate more accurate binary masks than those produced by conventional binary segmentation models. Main results. We have thoroughly evaluated FEEDNet on the CoNSeP, Kumar, and CPM-17 datasets. FEEDNet achieves the best panoptic quality (PQ) on the CoNSeP and CPM-17 datasets and the second-best PQ on the Kumar dataset. The 32-bit floating-point version of FEEDNet has a model size of 64.90 MB. With INT8 quantization, the model size reduces to only 16.51 MB, with a negligible loss in predictive performance on the Kumar and CPM-17 datasets and a minor loss on the CoNSeP dataset. Significance. Our proposed idea of generalized class-aware binary segmentation is shown to be accurate on a variety of datasets. FEEDNet has a smaller model size than previous nuclei segmentation networks, which makes it suitable for execution on memory-constrained edge devices, and its state-of-the-art predictive performance makes it a compelling choice.
The source code can be obtained from https://github.com/CandleLabAI/FEEDNet.
|
49
|
Martin PCN, Kim H, Lövkvist C, Hong B, Won KJ. Vesalius: high-resolution in silico anatomization of spatial transcriptomic data using image analysis. Mol Syst Biol 2022; 18:e11080. [PMID: 36065846 PMCID: PMC9446088 DOI: 10.15252/msb.202211080] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Revised: 08/18/2022] [Accepted: 08/19/2022] [Indexed: 11/25/2022] Open
Abstract
Characterization of tissue architecture promises to deliver insights into development, cell communication, and disease. In silico spatial domain retrieval methods have been developed for spatial transcriptomics (ST) data under the assumption that neighboring barcodes are transcriptionally similar. However, domain retrieval approaches based on this assumption cannot work in complex tissues composed of multiple cell types, a task that becomes especially challenging with cellular-resolution ST methods. We developed Vesalius to decipher tissue anatomy from ST data by applying image processing technology. Vesalius uniquely detected territories composed of multiple cell types and successfully recovered tissue structures in high-resolution ST data, including mouse brain, embryo, liver, and colon. Utilizing this tissue architecture, Vesalius identified tissue morphology-specific gene expression and region-specific gene expression changes for astrocytes, interneurons, oligodendrocytes, and entorhinal cells in the mouse brain.
Affiliation(s)
- Patrick C N Martin
- Department of Computational Biomedicine, Cedars‐Sinai Medical Center, Hollywood, CA, USA
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Hyobin Kim
- Department of Computational Biomedicine, Cedars‐Sinai Medical Center, Hollywood, CA, USA
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Cecilia Lövkvist
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Byung‐Woo Hong
- Computer Science Department, Chung‐Ang University, Seoul, Korea
- Kyoung Jae Won
- Department of Computational Biomedicine, Cedars‐Sinai Medical Center, Hollywood, CA, USA
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
|
50
|
Liu H, Kurc T. Deep learning for survival analysis in breast cancer with whole slide image data. Bioinformatics 2022; 38:3629-3637. [PMID: 35674341 PMCID: PMC9272797 DOI: 10.1093/bioinformatics/btac381] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/22/2022] [Accepted: 06/04/2022] [Indexed: 02/07/2023] Open
Abstract
MOTIVATION: Whole slide tissue images contain detailed data on the sub-cellular structure of cancer. Quantitative analyses of these data can lead to novel biomarkers for better cancer diagnosis and prognosis and can improve our understanding of cancer mechanisms. Such analyses are challenging to execute because of the size and complexity of whole slide image data and the relatively limited volume of training data for machine learning methods. RESULTS: We propose and experimentally evaluate a multi-resolution deep learning method for breast cancer survival analysis. The proposed method integrates image data at multiple resolutions with tumor, lymphocyte, and nuclear segmentation results from deep learning models. Our results show that this approach can significantly improve deep learning model performance compared to using only the original image data. The proposed approach achieves a c-index of 0.706, compared to a c-index of 0.551 for an approach that uses only color image data at the highest image resolution. Furthermore, when clinical features (sex, age, and cancer stage) are combined with image data, the proposed approach achieves a c-index of 0.773. AVAILABILITY AND IMPLEMENTATION: https://github.com/SBU-BMI/deep_survival_analysis.
Affiliation(s)
- Huidong Liu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
|