1. Bertram CA, Donovan TA, Bartel A. Mitotic activity: A systematic literature review of the assessment methodology and prognostic value in feline tumors. Vet Pathol 2024;61:743-751. PMID: 38533803; PMCID: PMC11370206; DOI: 10.1177/03009858241239566.
Abstract
Increased proliferation is a driver of tumorigenesis, and quantification of mitotic activity is a standard task for prognostication. This systematic review analyzes all available references on mitotic activity in feline tumors to provide an overview of the assessment methods and their prognostic value. A systematic literature search in PubMed and Scopus and a nonsystematic search in Google Scholar were conducted. All articles on feline tumors that correlated mitotic activity with patient outcome were identified. Data analysis revealed that of the 42 eligible articles, mitotic count (MC, mitotic figures/tumor area) was evaluated in 39 studies, and mitotic index (MI, mitotic figures/tumor cells) in 3 studies. The risk of bias was considered high for most studies (26/42, 62%) based on small study populations, insufficient details of the MC/MI methods, and lack of statistical measures for diagnostic accuracy or effect on outcome. The MC/MI methods varied between studies. A significant association of MC with survival was determined in 20 of 28 (71%) studies (10 studies evaluated other outcome metrics or provided individual patient data), while 1 study found an inverse effect. Three tumor types had at least 4 studies, and a prognostic association with survival was found in 5 of 6 studies on mast cell tumors, 5 of 5 on mammary tumors, and 3 of 4 on soft-tissue sarcomas. MI was shown to correlate with survival for mammary tumors by 2 research groups; however, comparisons to MC were not conducted. Further studies with standardized mitotic activity methods and appropriate statistical analysis of their ability to discriminate patient outcomes are needed to establish the prognostic value of MC and MI.
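The two quantities compared throughout the review reduce to simple ratios. A minimal sketch, assuming a standardized reporting area of 2.37 mm² (a common microscopy convention, not a figure taken from this review):

```python
def mitotic_count(n_figures: int, surveyed_area_mm2: float,
                  standard_area_mm2: float = 2.37) -> float:
    """Mitotic figures normalized to a standard tumor area (MC)."""
    if surveyed_area_mm2 <= 0:
        raise ValueError("surveyed area must be positive")
    return n_figures * standard_area_mm2 / surveyed_area_mm2


def mitotic_index(n_figures: int, n_tumor_cells: int) -> float:
    """Mitotic figures per tumor cell (MI), often reported as a percentage."""
    return n_figures / n_tumor_cells


# 12 figures found over 4.74 mm^2 -> 6 figures per 2.37 mm^2
print(mitotic_count(12, 4.74))           # → 6.0
print(mitotic_index(12, 1000) * 100)     # MI as a percentage
```

The normalization step is exactly why the review stresses standardized methods: the same raw tally yields different MCs depending on the surveyed area.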
2. Sheikh TS, Cho M. Segmentation of Variants of Nuclei on Whole Slide Images by Using Radiomic Features. Bioengineering (Basel) 2024;11:252. PMID: 38534526; DOI: 10.3390/bioengineering11030252.
Abstract
The histopathological segmentation of nuclear types is a challenging task because nuclei exhibit distinct morphologies, textures, and staining characteristics. Accurate segmentation is critical because it affects the diagnostic workflow for patient assessment. In this study, a framework was proposed for segmenting various types of nuclei from different organs of the body. The proposed framework improved the segmentation performance for each nuclear type using radiomics. First, we used distinct radiomic features to extract and analyze quantitative information about each type of nucleus and subsequently trained various classifiers based on the best input sub-features of each radiomic feature, selected by a LASSO operator. Second, we fed the outputs of the best classifier into various segmentation models to learn the variants of nuclei. Using the MoNuSAC2020 dataset, we achieved state-of-the-art segmentation performance for each category of nuclei despite complex, overlapping, and obscure regions. The generalizability of the proposed framework was verified by the consistent performance obtained across whole slide images of different organs and across radiomic features.
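The two-stage idea, LASSO-selected sub-features feeding a downstream model, can be sketched in a few lines. This is an illustrative analogue on synthetic data, not the authors' pipeline or the MoNuSAC2020 features:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))         # 200 nuclei x 40 "radiomic" features
y = (X[:, 3] > 0).astype(int)          # label driven by feature 3 only

# Stage 1: LASSO shrinks uninformative coefficients to exactly zero
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # indices of surviving sub-features

# Stage 2: train a classifier on the selected sub-features only
clf = RandomForestClassifier(random_state=0).fit(X[:, selected], y)
print("kept", selected.size, "of 40 features")
```

The sparsity of the L1 penalty is what makes LASSO usable as a feature selector rather than only as a regressor.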
Affiliation(s)
- Taimoor Shakeel Sheikh
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
- Migyung Cho
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
3. Jan M, Spangaro A, Lenartowicz M, Mattiazzi Usaj M. From pixels to insights: Machine learning and deep learning for bioimage analysis. Bioessays 2024;46:e2300114. PMID: 38058114; DOI: 10.1002/bies.202300114.
Abstract
Bioimage analysis plays a critical role in extracting information from biological images, enabling deeper insights into cellular structures and processes. The integration of machine learning and deep learning techniques has revolutionized the field, enabling the automated, reproducible, and accurate analysis of biological images. Here, we provide an overview of the history and principles of machine learning and deep learning in the context of bioimage analysis. We discuss the essential steps of the bioimage analysis workflow, emphasizing how machine learning and deep learning have improved preprocessing, segmentation, feature extraction, object tracking, and classification. We provide examples that showcase the application of machine learning and deep learning in bioimage analysis. We examine user-friendly software and tools that enable biologists to leverage these techniques without extensive computational expertise. This review is a resource for researchers seeking to incorporate machine learning and deep learning in their bioimage analysis workflows and enhance their research in this rapidly evolving field.
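The workflow steps the review names (preprocessing, segmentation, feature extraction) can be illustrated with a classical, non-deep pipeline on a synthetic image; real analyses substitute microscopy data and, increasingly, learned models for each stage:

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.05, size=(64, 64))   # noisy background
img[10:20, 10:20] += 1.0                     # two bright synthetic "cells"
img[40:52, 38:50] += 1.0

smooth = ndi.gaussian_filter(img, sigma=1)   # preprocessing: denoise
mask = smooth > 0.5                          # segmentation: threshold
labels, n_objects = ndi.label(mask)          # instance labelling
# feature extraction: per-object area in pixels
areas = ndi.sum(mask, labels, index=range(1, n_objects + 1))
print(n_objects, "objects, areas:", areas)
```

Each stage here has a deep-learning counterpart (denoising networks, U-Net-style segmenters, learned embeddings), which is the substitution the review surveys.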
Affiliation(s)
- Mahta Jan
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Allie Spangaro
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Michelle Lenartowicz
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Mojca Mattiazzi Usaj
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
4. Lakshmi Priya CV, Biju VG, Vinod BR, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024;40:1-25. PMID: 38517775; PMCID: PMC11191493; DOI: 10.3233/cbm-230251.
Abstract
BACKGROUND: Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE: To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS: This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS: Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms to improve the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION: This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can serve as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G
- Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran
- Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
5. Xiao X, Kong Y, Li R, Wang Z, Lu H. Transformer with convolution and graph-node co-embedding: An accurate and interpretable vision backbone for predicting gene expressions from local histopathological image. Med Image Anal 2024;91:103040. PMID: 38007979; DOI: 10.1016/j.media.2023.103040.
Abstract
Inferring gene expressions from histopathological images has long been a fascinating yet challenging task, primarily due to the substantial disparities between the two modalities. Existing strategies using local or global features of histological images suffer from high model complexity, heavy GPU consumption, low interpretability, insufficient encoding of local features, and over-smoothed prediction of gene expressions among neighboring sites. In this paper, we develop TCGN (Transformer with Convolution and Graph-Node co-embedding method) for gene expression estimation from H&E-stained pathological slide images. TCGN comprises convolutional layers, transformer encoders, and graph neural networks, and is the first to integrate these blocks in a general and interpretable computer vision backbone. Notably, TCGN operates with just a single spot image as input for histopathological image analysis, simplifying the process while maintaining interpretability. We validated TCGN on three publicly available spatial transcriptomic datasets, where it consistently exhibited the best performance (median PCC 0.232). TCGN offers superior accuracy while keeping parameters to a minimum (just 86.241 million), and it consumes little memory, allowing it to run smoothly even on personal computers. Moreover, TCGN can be extended to handle bulk RNA-seq data while retaining interpretability. Enhancing the accuracy of omics information prediction from pathological images not only establishes a connection between genotype and phenotype, enabling the prediction of costly-to-measure biomarkers from affordable histopathological images, but also lays the groundwork for future multi-modal data modeling. Our results confirm that TCGN is a powerful tool for inferring gene expressions from histopathological images in precision health applications.
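The headline metric (median PCC 0.232) is the median, across genes, of the Pearson correlation between predicted and measured expression. A small sketch with synthetic stand-ins for model predictions:

```python
import numpy as np


def median_pcc(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: (n_spots, n_genes) arrays; median per-gene Pearson r."""
    pccs = [np.corrcoef(pred[:, g], truth[:, g])[0, 1]
            for g in range(truth.shape[1])]
    return float(np.median(pccs))


rng = np.random.default_rng(0)
truth = rng.normal(size=(100, 5))                       # 100 spots, 5 genes
pred = truth + rng.normal(scale=1.0, size=truth.shape)  # noisy predictions
print(round(median_pcc(pred, truth), 2))
```

Taking the median rather than the mean keeps a few badly predicted genes from dominating the score.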
Affiliation(s)
- Xiao Xiao
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT, United States
- Yan Kong
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
- Ronghan Li
- SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; Zhiyuan College, Shanghai Jiao Tong University, Shanghai, China
- Zuoheng Wang
- Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT, United States
- Hui Lu
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; NHC Key Laboratory of Medical Embryogenesis and Developmental Molecular Biology & Shanghai Key Laboratory of Embryo and Reproduction Engineering, Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, Shanghai, China.
6. Hoque MZ, Keskinarkaus A, Nyberg P, Xu H, Seppänen T. Invasion depth estimation of carcinoma cells using adaptive stain normalization to improve epidermis segmentation accuracy. Comput Med Imaging Graph 2023;108:102276. PMID: 37611486; DOI: 10.1016/j.compmedimag.2023.102276.
Abstract
Submucosal invasion depth is a significant prognostic factor when assessing lymph node metastasis and the cancer itself to plan proper treatment for the patient. Conventionally, oncologists measure the invasion depth by hand, which is a laborious, subjective, and time-consuming process. Manual pathological measurement of carcinoma cell invasion remains challenging, with considerable inter-observer and intra-observer variation. The increasing use of medical imaging and artificial intelligence is playing a significant role in clinical medicine and pathology. In this paper, we propose an approach to study invasive behavior and measure the invasion depth of carcinoma from stained histopathology images. Specifically, our model includes adaptive stain normalization, color decomposition, and morphological reconstruction with adaptive thresholding to separate the epithelium using a blue-ratio image. Our method splits the image into multiple non-overlapping meaningful segments and successfully finds the homogeneous segments needed to measure accurate invasion depth. The invasion depths are measured from the inner epithelium edge to the outermost pixels of the deepest part of the particles in the image. We conduct our experiments on skin melanoma tissue samples as well as on an organotypic invasion model utilizing myoma tissue and oral squamous cell carcinoma. The performance is experimentally compared to three closely related reference methods, and our method provides superior results in measuring invasion depth. This computational technique will be beneficial for the segmentation of epithelium and other particles in the development of novel computer-aided diagnostic tools for biobank applications.
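The final measurement step, from the inner epithelium edge to the deepest invading pixel, can be sketched with a distance transform. The stain-normalization and color-decomposition stages that produce the masks are omitted here, and the geometry is synthetic:

```python
import numpy as np
from scipy import ndimage as ndi

edge = np.zeros((100, 100), dtype=bool)
edge[10, :] = True                      # inner epithelium edge (a line)

invasion = np.zeros_like(edge)
invasion[30:61, 40:50] = True           # invading particle, deepest row = 60

# distance from every pixel to the nearest edge pixel
dist_from_edge = ndi.distance_transform_edt(~edge)
depth_px = dist_from_edge[invasion].max()
print(depth_px)                         # → 50.0 (row 60 minus row 10)
```

Multiplying `depth_px` by the slide's microns-per-pixel calibration would convert the measurement to physical units.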
Affiliation(s)
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland; Division of Nephrology and Intelligent Critical Care, Department of Medicine, University of Florida, Gainesville, USA.
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pia Nyberg
- Biobank Borealis of Northern Finland, Oulu University Hospital, Finland; Translational Medicine Research Unit, Medical Research Center Oulu, Faculty of Medicine, University of Oulu, Finland
- Hongming Xu
- Department of Electrical and Computer Engineering, University of Alberta, Canada; School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
7. Das N, Saha S, Nasipuri M, Basu S, Chakraborti T. Deep-Fuzz: A synergistic integration of deep learning and fuzzy water flows for fine-grained nuclei segmentation in digital pathology. PLoS One 2023;18:e0286862. PMID: 37352172; PMCID: PMC10289330; DOI: 10.1371/journal.pone.0286862.
Abstract
Robust semantic segmentation of the tumour micro-environment is one of the major open challenges in machine-learning-enabled computational pathology. Though deep-learning-based systems have made significant progress, their task-agnostic, data-driven approach often lacks the contextual grounding necessary in biomedical applications. We present a novel fuzzy water flow scheme that takes the coarse segmentation output of a base deep learning framework and then provides a more fine-grained, instance-level robust segmentation output. Our two-stage synergistic segmentation method, Deep-Fuzz, works especially well for overlapping objects and achieves state-of-the-art performance on four public cell nuclei segmentation datasets. We also show through visual examples how our final output is better aligned with pathological insights, and thus more clinically interpretable.
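The fuzzy water flow scheme itself is not described in this abstract, but the coarse-to-fine idea can be mimicked with a standard distance-transform split: erode a coarse mask to one core per nucleus, then assign every foreground pixel to its nearest core. A hedged analogue, not the authors' algorithm:

```python
import numpy as np
from scipy import ndimage as ndi

# coarse binary mask: two overlapping "nuclei" (disks) fused into one blob
yy, xx = np.mgrid[0:40, 0:70]
mask = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 12 ** 2) | \
       ((yy - 20) ** 2 + (xx - 42) ** 2 <= 12 ** 2)

dist = ndi.distance_transform_edt(mask)      # depth inside the blob
cores, n_cores = ndi.label(dist > 6)         # one eroded core per nucleus

# grow the cores back over the mask: each pixel takes its nearest core label
_, (iy, ix) = ndi.distance_transform_edt(cores == 0, return_indices=True)
instances = np.where(mask, cores[iy, ix], 0)
print(n_cores, "instances recovered from one fused blob")
```

The threshold `6` is a hand-picked erosion depth for this toy geometry; watershed-style methods (and, per the paper, fuzzy flows) replace it with a principled flooding rule.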
Affiliation(s)
- Nirmal Das
- Department of Computer Science and Engineering (AIML), Institute of Engineering and Management, Kolkata, West Bengal, India
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Satadal Saha
- Department of Electrical and Computer Engineering, MCKV Institute of Engineering, Howrah, West Bengal, India
- Mita Nasipuri
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Subhadip Basu
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Tapabrata Chakraborti
- University College London and The Alan Turing Institute, London, United Kingdom
- Linacre College, University of Oxford, Oxford, United Kingdom
8. Sáenz-Gamboa JJ, Domenech J, Alonso-Manjarrés A, Gómez JA, de la Iglesia-Vayá M. Automatic semantic segmentation of the lumbar spine: Clinical applicability in a multi-parametric and multi-center study on magnetic resonance images. Artif Intell Med 2023;140:102559. PMID: 37210154; DOI: 10.1016/j.artmed.2023.102559.
Abstract
Significant difficulties in medical image segmentation include the high variability of images caused by their origin (multi-center), the acquisition protocols (multi-parametric), the variability of human anatomy, illness severity, the effects of age and gender, and other notable factors. This work addresses problems associated with the automatic semantic segmentation of lumbar spine magnetic resonance images using convolutional neural networks. We aimed to assign a class label to each pixel of an image, with classes defined by radiologists corresponding to structural elements such as vertebrae, intervertebral discs, nerves, blood vessels, and other tissues. The proposed network topologies are variants of the U-Net architecture, and we used several complementary blocks to define the variants: three types of convolutional blocks, spatial attention models, deep supervision, and a multilevel feature extractor. Here, we describe the topologies and analyze the results of the neural network designs that obtained the most accurate segmentation. Several proposed designs outperform the standard U-Net used as a baseline, primarily when used in ensembles, where the outputs of multiple neural networks are combined according to different strategies.
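The simplest of the ensemble strategies mentioned is averaging the per-class probability maps of several networks before the per-pixel argmax. A minimal sketch with random stand-ins for softmax outputs (5 classes over an 8×8 patch):

```python
import numpy as np


def ensemble_argmax(prob_maps):
    """prob_maps: list of (n_classes, H, W) softmax outputs -> (H, W) labels."""
    mean = np.mean(prob_maps, axis=0)   # average class probabilities
    return mean.argmax(axis=0)          # per-pixel winning class


rng = np.random.default_rng(0)
# three "network outputs": Dirichlet samples sum to 1 over the class axis
members = [rng.dirichlet(np.ones(5), size=(8, 8)).transpose(2, 0, 1)
           for _ in range(3)]
seg = ensemble_argmax(members)
print(seg.shape)
```

Other combination rules (majority vote, weighted averaging) drop into the same slot as `np.mean`.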
Affiliation(s)
- Jhon Jairo Sáenz-Gamboa
- FISABIO-CIPF Joint Research Unit in Biomedical Imaging, Fundaciò per al Foment de la Investigaciò Sanitària i Biomèdica (FISABIO), Av. de Catalunya 21, 46020 València, Spain.
- Julio Domenech
- Orthopedic Surgery Department, Hospital Arnau de Vilanova, Carrer de San Clemente s/n, 46015, València, Spain
- Antonio Alonso-Manjarrés
- Radiology Department, Hospital Arnau de Vilanova, Carrer de San Clemente s/n, 46015, València, Spain
- Jon A Gómez
- Pattern Recognition and Human Language Technology research center, Universitat Politècnica de València, Camí de Vera, s/n, 46022, València, Spain
- Maria de la Iglesia-Vayá
- FISABIO-CIPF Joint Research Unit in Biomedical Imaging, Fundaciò per al Foment de la Investigaciò Sanitària i Biomèdica (FISABIO), Av. de Catalunya 21, 46020 València, Spain; Regional ministry of Universal Health and Public Health in Valencia, Carrer de Misser Mascó 31, 46010 València, Spain.
9. An imbalance-aware nuclei segmentation methodology for H&E stained histopathology images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104720.
10. Saxena P, Goyal A. Accurate demarcation of a biased nucleus from H&E-stained follicular lymphoma tissue samples. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2023.2192550.
11. Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 2023;15:1-46. PMID: 38625364; PMCID: PMC9987406; DOI: 10.1007/s12530-023-09491-3.
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting nuclei is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, and the growing number of published research articles indicates its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
12. Verdicchio M, Brancato V, Cavaliere C, Isgrò F, Salvatore M, Aiello M. A pathomic approach for tumor-infiltrating lymphocytes classification on breast cancer digital pathology images. Heliyon 2023;9:e14371. PMID: 36950640; PMCID: PMC10025040; DOI: 10.1016/j.heliyon.2023.e14371.
Abstract
Background and objectives: The detection of tumor-infiltrating lymphocytes (TILs) could aid in the development of objective measures of the infiltration grade and can support decision-making in breast cancer (BC). However, manual quantification of TILs in BC histopathological whole slide images (WSI) is currently based on visual assessment, which is not standardized, not reproducible, and time-consuming for pathologists. In this work, a novel pathomic approach is proposed that applies high-throughput image feature extraction techniques to analyze the microscopic patterns in WSI. Pathomic features provide additional information concerning the underlying biological processes compared to visual WSI interpretation, thus yielding more easily interpretable and explainable results than the deep-learning-based methods most frequently investigated in the literature. Methods: A dataset containing 1037 regions of interest with tissue compartments and TILs annotated on 195 TNBC and HER2+ BC hematoxylin and eosin (H&E)-stained WSI was used. After segmenting nuclei within tumor-associated stroma using a watershed-based approach, 71 pathomic features were extracted from each nucleus and reduced using a Spearman's correlation filter followed by a nonparametric Wilcoxon rank-sum test and the least absolute shrinkage and selection operator (LASSO). The relevant features were used to classify each candidate nucleus as either TIL or non-TIL using 5 multivariable machine learning classification models trained with 5-fold cross-validation (1) without resampling, (2) with the synthetic minority over-sampling technique, and (3) with downsampling. The prediction performance of the models was assessed using ROC curves. Results: 21 features were selected, most of them related to the well-known TIL properties of regular shape, clearer margins, high peak intensity, more homogeneous enhancement, and a textural pattern different from that of other cells. The best performance was obtained by random forest with a ROC AUC of 0.86, regardless of the resampling technique. Conclusions: The presented approach holds promise for the classification of TILs in BC H&E-stained WSI and could support pathologists in a reliable, rapid, and interpretable clinical assessment of TILs in BC.
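The feature-reduction chain (a Spearman correlation filter, then a Wilcoxon rank-sum test) can be sketched as below; the final LASSO step and the real 71 pathomic features are omitted, and the data are synthetic:

```python
import numpy as np
from scipy.stats import spearmanr, ranksums

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                    # 120 nuclei x 10 features
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=120)   # feature 1 duplicates 0
y = rng.integers(0, 2, size=120)                  # TIL / non-TIL labels
X[y == 1, 4] += 2.0                               # feature 4 separates classes

rho, _ = spearmanr(X)                             # 10x10 rank correlations
rho = np.abs(rho)
# correlation filter: drop a feature highly correlated with an earlier one
keep = [j for j in range(10) if not any(rho[i, j] > 0.9 for i in range(j))]
# univariate filter: keep features that differ between the two classes
keep = [j for j in keep if ranksums(X[y == 0, j], X[y == 1, j]).pvalue < 0.01]
print("surviving features:", keep)
```

The 0.9 correlation cutoff and 0.01 significance level are illustrative choices, not thresholds reported by the paper.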
Affiliation(s)
- Valentina Brancato
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
- Corresponding author.
- Carlo Cavaliere
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
- Francesco Isgrò
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Claudio 21, Naples, 80125, Italy
- Marco Salvatore
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
- Marco Aiello
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
13. Mahmud BU, Hong GY, Mamun AA, Ping EP, Wu Q. Deep Learning-Based Segmentation of 3D Volumetric Image and Microstructural Analysis. Sensors (Basel) 2023;23:2640. PMID: 36904845; PMCID: PMC10007404; DOI: 10.3390/s23052640.
Abstract
As a fundamental but difficult topic in computer vision, 3D object segmentation has various applications in medical image analysis, autonomous vehicles, robotics, virtual reality, lithium battery image analysis, etc. In the past, 3D segmentation was performed using hand-crafted features and design techniques, but these could not generalize to vast amounts of data or reach acceptable accuracy. Deep learning techniques have lately emerged as the preferred method for 3D segmentation tasks as a result of their extraordinary performance in 2D computer vision. Our proposed method uses a CNN-based architecture called 3D UNET, inspired by the famous 2D UNET, to segment volumetric image data. To observe internal changes in composite materials, for instance in a lithium battery image, it is necessary to trace the flow of the different materials and analyze their internal properties. In this paper, a combination of 3D UNET and VGG19 is used to conduct multiclass segmentation of publicly available sandstone datasets and analyze their microstructures, based on four different objects in the samples of volumetric data. Our image sample comprises 448 2D images, which are aggregated into one 3D volume to examine the volumetric data. The solution involves segmenting each object in the volume and further analyzing each object to find its average size, area percentage, total area, etc. The open-source image processing package ImageJ is used for further analysis of individual particles. In this study, it was demonstrated that convolutional neural networks can be trained to recognize sandstone microstructure traits with an accuracy of 96.78% and an IoU of 91.12%. To our knowledge, many prior works have applied 3D UNET for segmentation, but very few extend it further to show the details of the particles in the sample. The proposed solution offers computational insight for real-time implementation and is found to be superior to current state-of-the-art methods. The result is important for the creation of approximately similar models for the microstructural analysis of volumetric data.
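The per-object statistics computed after segmentation (size, area percentage) reduce to labelling connected components and counting voxels. The sketch below uses a synthetic volume in place of the segmented sandstone data, approximating the ImageJ-style particle analysis with scipy:

```python
import numpy as np
from scipy import ndimage as ndi

vol = np.zeros((20, 20, 20), dtype=bool)
vol[2:6, 2:6, 2:6] = True               # particle 1: 4x4x4 = 64 voxels
vol[10:18, 10:18, 10:18] = True         # particle 2: 8x8x8 = 512 voxels

labels, n_particles = ndi.label(vol)    # connected components in 3D
sizes = ndi.sum(vol, labels, index=range(1, n_particles + 1))
share = sizes / vol.size * 100          # volume percentage per particle
print(n_particles, sizes, share)
```

Average size, total area, and similar summaries are one further `numpy` reduction over `sizes`.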
Affiliation(s)
- Bahar Uddin Mahmud
- Department of Computer Science, Western Michigan University, Kalamazoo, MI 49008, USA
- Guan Yue Hong
- Department of Computer Science, Western Michigan University, Kalamazoo, MI 49008, USA
- Abdullah Al Mamun
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
| | - Em Poh Ping
- Faculty Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
| | - Qingliu Wu
- Department of Chemical and Paper Engineering, Western Michigan University, Kalamazoo, MI 49008, USA
| |
14
Wang Z, Zhou X, Gui Y, Liu M, Lu H. Multiple measurement analysis of resting-state fMRI for ADHD classification in adolescent brain from the ABCD study. Transl Psychiatry 2023; 13:45. [PMID: 36746929 PMCID: PMC9902465 DOI: 10.1038/s41398-023-02309-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Revised: 12/22/2022] [Accepted: 01/06/2023] [Indexed: 02/08/2023] Open
Abstract
Attention deficit hyperactivity disorder (ADHD) is one of the most common psychiatric disorders in school-aged children. Accurate diagnosis serves patients' interests by enabling effective treatment, which is important to them and their families. Resting-state functional magnetic resonance imaging (rsfMRI) has been widely used to characterize abnormal brain function by computing voxel-wise measures and Pearson's correlation (PC)-based functional connectivity (FC) for ADHD diagnosis. However, exploiting the most powerful rsfMRI measures to improve ADHD diagnosis remains a particular challenge. To this end, this paper proposes an automated ADHD classification framework that fuses multiple rsfMRI measures of the adolescent brain. First, we extract the voxel-wise measures and ROI-wise time series from the brain regions of rsfMRI after preprocessing. Then, to extract multiple functional connectivities, we compute the PC-derived FCs, including the topographical-information-based high-order FC (tHOFC) and the dynamics-based high-order FC (dHOFC), and the sparse-representation (SR)-derived FCs, including the group SR (GSR), the strength- and similarity-guided GSR (SSGSR), and sparse low-rank (SLR) FC. Finally, these measures are combined in a multiple kernel learning (MKL) model for ADHD classification. The proposed method is applied to the Adolescent Brain Cognitive Development (ABCD) dataset. The results show that the dHOFC and SLR FCs perform better than the others. Fusing multiple measures achieves the best classification performance (AUC = 0.740, accuracy = 0.6916), superior to that of any single measure and of previous studies. We have identified the most discriminative FCs and brain regions for ADHD diagnosis, which are consistent with the published literature.
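The baseline measure in this pipeline, a PC-based FC matrix from ROI-wise time series, can be sketched in NumPy and vectorized into features for a downstream classifier. The array shapes (200 time points, 90 atlas ROIs) are illustrative assumptions, not the ABCD preprocessing pipeline.

```python
import numpy as np

def pearson_fc(ts):
    """ts: (T, R) array of T time points for R ROIs -> (R, R) FC matrix."""
    fc = np.corrcoef(ts, rowvar=False)  # Pearson correlation between ROI pairs
    np.fill_diagonal(fc, 0.0)           # zero out uninformative self-connections
    return fc

def fc_features(fc):
    """Vectorize the upper triangle (R*(R-1)/2 values) as classifier input."""
    return fc[np.triu_indices(fc.shape[0], k=1)]

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 90))   # e.g. 200 volumes, 90 atlas ROIs
features = fc_features(pearson_fc(ts))
print(features.shape)
```

Each of the higher-order and sparse variants (tHOFC, dHOFC, GSR, SSGSR, SLR) produces a matrix of the same shape, so one such feature vector per measure is what the MKL model fuses.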
Affiliation(s)
- Zhaobin Wang
- State Key Lab of Microbial Metabolism, Joint International Research Laboratory of Metabolic Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center of Biostatistics and Data Science, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
- Xiaocheng Zhou
- State Key Lab of Microbial Metabolism, Joint International Research Laboratory of Metabolic Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yuanyuan Gui
- State Key Lab of Microbial Metabolism, Joint International Research Laboratory of Metabolic Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center of Biostatistics and Data Science, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
- Manhua Liu
- MoE Key Laboratory of Artificial Intelligence, AI Institute, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hui Lu
- State Key Lab of Microbial Metabolism, Joint International Research Laboratory of Metabolic Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center of Biostatistics and Data Science, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, Center for Biomedical Informatics, Shanghai Children's Hospital, Shanghai, China
15
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
16
Ilyas T, Mannan ZI, Khan A, Azam S, Kim H, De Boer F. TSFD-Net: Tissue specific feature distillation network for nuclei segmentation and classification. Neural Netw 2022; 151:1-15. [DOI: 10.1016/j.neunet.2022.02.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 12/26/2021] [Accepted: 02/23/2022] [Indexed: 10/18/2022]
17
Ali H, Haq IU, Cui L, Feng J. MSAL-Net: improve accurate segmentation of nuclei in histopathology images by multiscale attention learning network. BMC Med Inform Decis Mak 2022; 22:90. [PMID: 35379228 PMCID: PMC8978355 DOI: 10.1186/s12911-022-01826-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 03/24/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Digital pathology images contain essential information about a patient's disease, and automated nuclei segmentation results can help doctors make better diagnostic decisions. With the rapid advancement of convolutional neural networks in image processing, deep learning has been shown to play a significant role in various medical image analysis tasks, such as nuclei segmentation and mitosis detection and segmentation. Recently, several U-Net-based methods have been developed to solve the automated nuclei segmentation problem. However, these methods fail to deal with the weak feature representations from the initial layers and introduce noise into the decoder path. In this paper, we propose a multiscale attention learning network (MSAL-Net), in which a dense dilated convolution block captures more comprehensive nuclei context information, and a newly modified decoder, integrating efficient channel attention and boundary refinement modules, effectively learns spatial information for better prediction and further refines nuclei boundaries. RESULTS Both qualitative and quantitative results are obtained on the publicly available MoNuSeg dataset. Extensive experimental results verify that our proposed method significantly outperforms state-of-the-art methods, as well as the vanilla U-Net, in the segmentation task. Furthermore, we visually demonstrate the effect of our modified decoder. CONCLUSION MSAL-Net, with its novel decoder, shows superior performance in accurately segmenting touching nuclei and nuclei against blurred backgrounds in histopathology images.
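The channel-attention idea used in the modified decoder can be sketched as a squeeze-and-gate operation: global-average-pool each channel, compute a per-channel weight, and rescale the feature maps. The NumPy version below is a hypothetical illustration with a dense gating matrix `w`, not the paper's actual efficient channel attention module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """feat: (C, H, W) feature maps; w: (C, C) gating weights (hypothetical).
    Global-average-pool each channel, gate it, and rescale the maps."""
    squeeze = feat.mean(axis=(1, 2))      # (C,) per-channel descriptors
    gate = sigmoid(w @ squeeze)           # (C,) attention weights in (0, 1)
    return feat * gate[:, None, None]     # reweight each channel's map

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 32, 32))  # decoder feature block
w = rng.standard_normal((16, 16)) * 0.1
out = channel_attention(feat, w)
print(out.shape)
```

Because the gate is computed from the pooled channel statistics rather than spatial positions, it suppresses noisy channels coming from the skip connections while leaving the spatial layout of each map untouched.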
Affiliation(s)
- Haider Ali
- School of Information Science and Technology, Northwest University, Xi'an, China
- Imran ul Haq
- School of Information Science and Technology, Northwest University, Xi'an, China
- Lei Cui
- School of Information Science and Technology, Northwest University, Xi'an, China
- Jun Feng
- School of Information Science and Technology, Northwest University, Xi'an, China