1. Pérez-Bueno F, Engan K, Molina R. Robust blind color deconvolution and blood detection on histological images using Bayesian K-SVD. Artif Intell Med 2024; 156:102969. PMID: 39182468. DOI: 10.1016/j.artmed.2024.102969.
Abstract
Hematoxylin and Eosin (H&E) color variation among histological images from different laboratories can significantly degrade the performance of Computer-Aided Diagnosis systems. The staining procedure is the primary source of this variation, and methods to reduce it are therefore designed around that procedure. In particular, Blind Color Deconvolution (BCD) methods aim to identify the true underlying colors in the image and to separate the tissue structure from the color information. Unfortunately, BCD methods often assume that images are stained solely with pure staining colors (e.g., blue and pink for H&E). This assumption does not hold when common artifacts such as blood are present, which require an additional color component to represent them. This is a challenge for color standardization algorithms, which are then unable to correctly identify the stains in the image, leading to unexpected results. In this work, we propose a Blood-Robust Bayesian K-Singular Value Decomposition model designed to simultaneously detect blood and extract color from histological images while preserving structural details. We evaluate our method using both synthetic and real images containing varying amounts of blood pixels.
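For readers unfamiliar with color deconvolution, the sketch below illustrates the basic Beer-Lambert stain-separation model that BCD methods such as the one above build on. It is not the paper's Bayesian K-SVD algorithm; the reference H&E stain vectors and the random tile are illustrative placeholders.

```python
# Minimal sketch of classical stain separation (Beer-Lambert optical density
# plus a reference stain matrix), NOT the Bayesian K-SVD model of the paper.
import numpy as np

def color_deconvolution(rgb, stain_matrix):
    """Separate an RGB tile (H, W, 3, values 0..255) into per-stain concentrations."""
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)    # optical density per channel
    od = od.reshape(-1, 3)                                    # (pixels, 3)
    # Least-squares estimate of per-pixel concentrations for each stain column.
    concentrations, *_ = np.linalg.lstsq(stain_matrix, od.T, rcond=None)
    return concentrations.T.reshape(rgb.shape[:2] + (stain_matrix.shape[1],))

# Illustrative H&E reference colors (columns = hematoxylin, eosin), unit norm.
he_stains = np.array([[0.65, 0.07],
                      [0.70, 0.99],
                      [0.29, 0.11]])
he_stains /= np.linalg.norm(he_stains, axis=0)

demo = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a tile
conc = color_deconvolution(demo, he_stains)
print(conc.shape)  # (64, 64, 2): hematoxylin and eosin concentration maps
```

Blind methods such as the one above estimate the stain matrix from the image itself (and, here, add a component for blood) instead of fixing it in advance.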
Affiliation(s)
- Fernando Pérez-Bueno
- Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain; Research Center for Information and Communication Technologies (CITIC-UGR), Spain.
- Kjersti Engan
- Department of Electrical Engineering and Computer Science, University of Stavanger, Norway.
- Rafael Molina
- Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain.
2. Cai C, Zhou Y, Jiao Y, Li L, Xu J. Prognostic Analysis Combining Histopathological Features and Clinical Information to Predict Colorectal Cancer Survival from Whole-Slide Images. Dig Dis Sci 2024; 69:2985-2995. PMID: 38837111. DOI: 10.1007/s10620-024-08501-x.
Abstract
BACKGROUND Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both a high incidence rate and high mortality. Early detection and intervention could improve patient clinical outcomes and survival. METHODS This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, these pathological image features can predict the prognosis of CRC patients. Our CRC prognosis prediction pipeline consists of three sequential modules: (1) a MultiTissue Net that delineates the different tissue types within a whole-slide image (WSI) of CRC for further ROI selection by pathologists; (2) development of three-level quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; and (3) fusion of the multi-level features to build a prognostic model for predicting CRC survival. RESULTS Experimental results suggest that each group of features has a particular relationship with patient prognosis in the independent test set. When the fused features were combined, the accuracy for predicting patients' prognosis and survival status was 81.52%, and the AUC was 0.77. CONCLUSION This paper constructs a model that can predict the postoperative survival of patients by using image features and clinical information. Several features were found to be associated with patient prognosis and survival.
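As a rough illustration of the feature-fusion idea described above, the sketch below concatenates image-derived and clinical features and fits a Cox proportional hazards model with the lifelines library. The column names, synthetic data, and choice of survival model are assumptions for illustration and do not reproduce the authors' MultiTissue Net pipeline.

```python
# Hedged sketch: fuse image-derived features with clinical variables and fit a
# survival model. All values are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "tissue_fraction_tumor": rng.uniform(0, 1, n),   # tissue-composition feature
    "mean_nucleus_area": rng.normal(40, 5, n),        # cell-shape feature
    "deep_feature_1": rng.normal(0, 1, n),            # hidden deep-network feature
    "age": rng.integers(40, 85, n),                   # clinical variable
    "stage": rng.integers(1, 5, n),                   # clinical variable
    "time_months": rng.exponential(30, n),            # follow-up time
    "event": rng.integers(0, 2, n),                   # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.concordance_index_)  # discrimination of the fused-feature model
```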
Affiliation(s)
- Chengfei Cai
- School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- College of Information Engineering, Taizhou University, Taizhou, 225300, China.
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- Yangshu Zhou
- Department of Pathology, Zhujiang Hospital of Southern Medical University, Guangzhou, 510280, China
- Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Liang Li
- Department of Pathology, Nanfang Hospital of Southern Medical University, Guangzhou, 510515, China
- Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
3. Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. PMID: 37594900. DOI: 10.5858/arpa.2022-0460-ra.
Abstract
CONTEXT Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. OBJECTIVE To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. DATA SOURCES The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multiclass classification methods. Data were extracted from papers that reported accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. CONCLUSIONS It is more difficult to achieve high accuracy for multiclass tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis.
Affiliation(s)
- Aaryn Frewing
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
4. Sun K, Zheng Y, Yang X, Jia W. A novel transformer-based aggregation model for predicting gene mutations in lung adenocarcinoma. Med Biol Eng Comput 2024; 62:1427-1440. PMID: 38233683. DOI: 10.1007/s11517-023-03004-9.
Abstract
In recent years, predicting gene mutations from whole-slide images (WSIs) has gained prominence. The primary challenge is extracting global information and achieving unbiased semantic aggregation. To address this challenge, we propose a novel Transformer-based aggregation model that employs a self-learning weight aggregation mechanism to mitigate the semantic bias caused by the abundance of features in a WSI. Additionally, we adopt a random patch training method, which enriches model learning by randomly extracting feature vectors from the WSI and thus addresses the issue of limited data. To demonstrate the model's effectiveness in predicting gene mutations, we leverage the lung adenocarcinoma dataset from Shandong Provincial Hospital for prior knowledge learning. Subsequently, we assess TP53, CSMD3, LRP1B, and TTN gene mutations using lung adenocarcinoma tissue pathology images and clinical data from The Cancer Genome Atlas (TCGA). The results indicate a notable increase in the AUC (area under the ROC curve), averaging 4%, attesting to the model's improved performance. Our research offers an efficient model to explore the correlation between pathological image features and molecular characteristics in lung adenocarcinoma patients. This model introduces a novel approach to clinical genetic testing that is expected to improve the efficiency of identifying molecular features and performing genetic testing in lung adenocarcinoma patients, ultimately providing more accurate and reliable results for related studies.
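The sketch below shows a generic Transformer-based aggregation of patch-level feature vectors into a slide-level multi-gene prediction, together with random patch sampling. The dimensions, the CLS-token design, and the sample size are illustrative assumptions, not the authors' exact self-learning weight aggregation model.

```python
# Hedged sketch of Transformer-style aggregation of WSI patch features.
import torch
import torch.nn as nn

class SlideAggregator(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2, n_genes=4):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_genes)   # e.g. TP53, CSMD3, LRP1B, TTN

    def forward(self, patch_feats):                # (batch, n_patches, feat_dim)
        cls = self.cls.expand(patch_feats.size(0), -1, -1)
        x = torch.cat([cls, patch_feats], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])                  # logits read from the CLS token

# "Random patch training": sample a fixed number of patch features per slide.
slide_feats = torch.randn(1, 5000, 512)            # stand-in for extracted features
idx = torch.randperm(slide_feats.size(1))[:256]
logits = SlideAggregator()(slide_feats[:, idx, :])
print(torch.sigmoid(logits))                       # per-gene mutation probabilities
```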
Affiliation(s)
- Kai Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China.
- Xinbo Yang
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
- Weikuan Jia
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China.
5. Gifani P, Shalbaf A. Transfer Learning with Pretrained Convolutional Neural Network for Automated Gleason Grading of Prostate Cancer Tissue Microarrays. J Med Signals Sens 2024; 14:4. PMID: 38510670. PMCID: PMC10950311. DOI: 10.4103/jmss.jmss_42_22.
Abstract
Background The Gleason grading system has been the most effective predictor of outcome for prostate cancer patients. It makes it possible to assess the aggressiveness of prostate cancer and is therefore an important factor in risk stratification and therapeutic decisions. However, determining the Gleason grade requires highly trained pathologists, is time-consuming and tedious, and suffers from inter-pathologist variability. To remedy these limitations, this paper introduces an automatic methodology based on transfer learning with pretrained convolutional neural networks (CNNs) for automatic Gleason grading of prostate cancer tissue microarrays (TMAs). Methods Fifteen pretrained CNNs (EfficientNet B0-B5, NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SE-ResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2) were fine-tuned on a dataset of prostate carcinoma TMA images. Six pathologists independently annotated the benign and cancerous areas of each prostate TMA image from 244 patients, assigning benign or Gleason grade 3, 4, or 5 to each area. A majority vote over the pixel-wise annotations was applied to obtain a unified label. Results NASNetLarge was the best-performing model for classifying the prostate TMA images of the 244 patients, with an accuracy of 0.93 and an area under the curve of 0.98. Conclusion The proposed approach can categorize prostate cancer grades comparably to a highly trained pathologist, with more objective and reproducible results.
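A minimal sketch of the transfer-learning recipe described above follows: an ImageNet-pretrained backbone has its classifier head replaced with four outputs (benign and Gleason grades 3-5) and is fine-tuned. torchvision's ResNet-50 stands in for the fifteen architectures compared in the paper, and the freezing strategy and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of fine-tuning an ImageNet-pretrained CNN for 4-class grading.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)      # benign, GG3, GG4, GG5

# Optionally freeze early layers and train only the last block and the head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                             lr=1e-4)
criterion = nn.CrossEntropyLoss()

tma_batch = torch.randn(8, 3, 224, 224)            # stand-in for TMA image tiles
labels = torch.randint(0, 4, (8,))
loss = criterion(model(tma_batch), labels)
loss.backward()
optimizer.step()
```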
Affiliation(s)
- Parisa Gifani
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
6. Sahiner M, Sunol AK, Sahiner N. Cell Staining Microgels Derived from a Natural Phenolic Dye: Hematoxylin Has Intriguing Biomedical Potential. Pharmaceutics 2024; 16:147. PMID: 38276517. PMCID: PMC10818966. DOI: 10.3390/pharmaceutics16010147.
Abstract
Hematoxylin (HT), a natural phenolic dye, is generally used together with eosin (E) as H&E in the histological staining of tissues. Here, we report for the first time the preparation of polymeric particles from HT as poly(Hematoxylin) (p(HT)) microgels via a one-step microemulsion method using a benign crosslinker, glycerol diglycidyl ether (GDE). P(HT) microgels are about 10 µm in size and spherical in shape, with a zeta potential of -34.6 ± 2.8 mV and an isoelectric point (IEP) at pH 1.79. Interestingly, the fluorescence properties of HT molecules were retained upon microgel formation; e.g., the fluorescence emission intensity of p(HT) at 343 nm was about 2.8 times lower than that of the HT molecule at λex: 300 nm. P(HT) microgels are hydrolytically degradable, and degradation can be controlled by the amount of crosslinker, GDE; e.g., about 40%, 20%, and 10% of p(HT) microgels were degraded in 15 days in aqueous environments for microgels prepared at 100, 200, and 300% mole ratios of GDE to HT, respectively. Interestingly, HT molecules at 1000 mg/mL showed 22.7 ± 0.4% cell viability, whereas the p(HT) microgels exhibited a cell viability of 94.3 ± 7.2% against fibroblast cells. Furthermore, even at 2000 mg/mL concentrations of HT and p(HT), the inhibition of the α-glucosidase enzyme was measured as 93.2 ± 0.3% and 81.3 ± 6.3%, respectively, at a 0.03 unit/mL enzyme concentration, establishing some potential application of p(HT) microgels for neurodegenerative diseases. Moreover, p(HT) microgels showed two-fold higher MBC values than HT molecules, e.g., 5.0 versus 2.5 mg/mL against Gram-negative E. coli and Gram-positive S. aureus.
Affiliation(s)
- Mehtap Sahiner
- Department of Bioengineering, Faculty of Engineering, Canakkale Onsekiz Mart University Terzioglu Campus, Canakkale 17100, Turkey;
- Department of Chemical & Biomedical Engineering, Materials Science and Engineering Program, University of South Florida, Tampa, FL 33620, USA;
- Aydin K. Sunol
- Department of Chemical & Biomedical Engineering, Materials Science and Engineering Program, University of South Florida, Tampa, FL 33620, USA;
- Nurettin Sahiner
- Department of Chemical & Biomedical Engineering, Materials Science and Engineering Program, University of South Florida, Tampa, FL 33620, USA;
- Department of Chemistry, Faculty of Sciences & Arts, and Nanoscience and Technology Research and Application Center (NANORAC), Canakkale Onsekiz Mart University Terzioglu Campus, Canakkale 17100, Turkey
- Department of Ophthalmology, Morsani College of Medicine, University of South Florida Eye Institute, 12901 Bruce B Down Blvd, MDC 21, Tampa, FL 33612, USA
7. Golfe A, Del Amor R, Colomer A, Sales MA, Terradez L, Naranjo V. ProGleason-GAN: Conditional progressive growing GAN for prostatic cancer Gleason grade patch synthesis. Comput Methods Programs Biomed 2023; 240:107695. PMID: 37393742. DOI: 10.1016/j.cmpb.2023.107695.
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer is one of the most common diseases affecting men. The main diagnostic and prognostic reference tool is the Gleason scoring system, in which an expert pathologist assigns a Gleason grade to a sample of prostate tissue. As this process is very time-consuming, artificial intelligence applications have been developed to automate it. The training process, however, is often confronted with insufficient and unbalanced databases, which affect the generalisability of the models. Therefore, the aim of this work is to develop a generative deep learning model capable of synthesising patches of any selected Gleason grade, in order to perform data augmentation on unbalanced data and test the improvement of classification models. METHODOLOGY The methodology proposed in this work consists of a conditional Progressive Growing GAN (ProGleason-GAN) capable of synthesising prostate histopathological tissue patches by selecting the desired Gleason grade cancer pattern in the synthetic sample. The conditional Gleason grade information is introduced into the model through embedding layers, so there is no need to add a term to the Wasserstein loss function. We used minibatch standard deviation and pixel normalisation to improve the performance and stability of the training process. RESULTS The realism of the synthetic samples was assessed with the Fréchet Inception Distance (FID). We obtained an FID of 88.85 for non-cancerous patterns, 81.86 for GG3, 49.32 for GG4 and 108.69 for GG5 after post-processing stain normalisation. In addition, a group of expert pathologists performed an external validation of the proposed framework. Finally, the application of our framework improved the classification results on the SICAPv2 dataset, proving its effectiveness as a data augmentation method. CONCLUSIONS The ProGleason-GAN approach combined with stain normalisation post-processing provides state-of-the-art results in terms of the Fréchet Inception Distance. The model can synthesise samples of non-cancerous patterns, GG3, GG4 or GG5. The inclusion of conditional information about the Gleason grade during training allows the model to select the cancerous pattern in a synthetic sample. The proposed framework can be used as a data augmentation method.
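For reference, the sketch below computes the Fréchet Inception Distance used above to assess synthetic-patch realism from the means and covariances of two feature sets. In practice the features come from an InceptionV3 pooling layer; the random arrays here are placeholders.

```python
# Hedged sketch of the FID computation between real and synthetic patch features.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):                 # numerical noise can add tiny
        covmean = covmean.real                    # imaginary parts
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, (500, 64))            # stand-in Inception features
fake = rng.normal(0.1, 1.1, (500, 64))
print(fid(real, fake))                             # lower = more realistic patches
```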
Affiliation(s)
- Alejandro Golfe
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-Tech), Universitat Politècnica de València, 46022, Spain.
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-Tech), Universitat Politècnica de València, 46022, Spain
- Adrián Colomer
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-Tech), Universitat Politècnica de València, 46022, Spain; ValgrAI - Valencian Graduate School and Research Network for Artificial Intelligence, Spain
- María A Sales
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Spain
- Liria Terradez
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Spain
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-Tech), Universitat Politècnica de València, 46022, Spain
8. Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. PMID: 37190264. DOI: 10.3390/cancers15082335.
Abstract
Histopathological classification in prostate cancer remains a challenge with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign tissue with an accuracy of 91%, an F1-score of 0.91 and an AUC of 0.96 in a baseline test (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
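The tile- and patient-level metrics reported above (accuracy, F1-score, AUC) can be computed as in the short sketch below; the label and probability arrays are placeholders, not the study's predictions.

```python
# Hedged sketch: standard evaluation metrics for benign-vs-GS3/4 predictions.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                   # 0 = benign, 1 = cancer (GS3/4)
y_prob = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]   # model scores
y_pred = [int(p >= 0.5) for p in y_prob]            # thresholded class labels

print(accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
print(roc_auc_score(y_true, y_prob))
```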
Affiliation(s)
- Ryan Fogarty
- Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez
- Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson
- Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara
- Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen
- Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang
- Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
9. Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. PMID: 37260910. PMCID: PMC10071480. DOI: 10.1007/s11831-023-09899-9.
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, especially object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages from the data. Recently, different interesting and inspiring ideas from deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance and training of CNNs. Innovations in the internal architecture of CNNs and different representational styles have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and different convolutional neural network models, especially the depth and width of models, as well as CNN components, applications, and the current challenges of deep learning.
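To make the depth and width notions discussed in the survey concrete, the sketch below builds a small CNN in which a depth parameter controls the number of convolutional blocks and a width parameter the number of channels per block. This is a didactic example, not a model from the survey.

```python
# Hedged illustration of CNN depth and width as tunable architecture knobs.
import torch
import torch.nn as nn

def make_cnn(depth=3, width=16, n_classes=10):
    layers, in_ch = [], 3
    for d in range(depth):
        out_ch = width * (2 ** d)                  # channels grow with depth
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    return nn.Sequential(*layers,
                         nn.AdaptiveAvgPool2d(1),
                         nn.Flatten(),
                         nn.Linear(in_ch, n_classes))

model = make_cnn(depth=4, width=32)
print(model(torch.randn(2, 3, 64, 64)).shape)      # torch.Size([2, 10])
```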
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124 Beijing China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586 Kingdom of Saudi Arabia
10. Nishio M, Matsuo H, Kurata Y, Sugiyama O, Fujimoto K. Label Distribution Learning for Automatic Cancer Grading of Histopathological Images of Prostate Cancer. Cancers (Basel) 2023; 15:1535. PMID: 36900325. PMCID: PMC10000939. DOI: 10.3390/cancers15051535.
Abstract
We aimed to develop and evaluate an automatic prediction system for grading histopathological images of prostate cancer. A total of 10,616 whole slide images (WSIs) of prostate tissue were used in this study. The WSIs from one institution (5160 WSIs) were used as the development set, while those from the other institution (5456 WSIs) were used as the unseen test set. Label distribution learning (LDL) was used to address a difference in label characteristics between the development and test sets. A combination of EfficientNet (a deep learning model) and LDL was utilized to develop the automatic prediction system. Quadratic weighted kappa (QWK) and accuracy on the test set were used as the evaluation metrics. The QWK and accuracy were compared between systems with and without LDL to evaluate the usefulness of LDL in system development. The QWK and accuracy were 0.364 and 0.407 in the system with LDL and 0.240 and 0.247 in the system without LDL, respectively. Thus, LDL improved the diagnostic performance of the automatic prediction system for cancer grading of histopathological images. By handling the difference in label characteristics using LDL, the diagnostic performance of the automatic system could be improved for prostate cancer grading.
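A brief sketch of the two ingredients named above follows: training against a label distribution with a KL-divergence loss, and evaluating with quadratic weighted kappa. The grade distributions and predictions are made up, and the loss formulation is a common LDL choice rather than necessarily the authors' exact implementation.

```python
# Hedged sketch of (1) label distribution learning with a KL loss and
# (2) quadratic weighted kappa as the evaluation metric.
import torch
import torch.nn.functional as F
from sklearn.metrics import cohen_kappa_score

# (1) Each WSI gets a soft distribution over grades 0-5 instead of a hard label.
logits = torch.randn(4, 6, requires_grad=True)           # model outputs for 4 WSIs
target_dist = torch.tensor([[0.0, 0.1, 0.7, 0.2, 0.0, 0.0],
                            [0.8, 0.2, 0.0, 0.0, 0.0, 0.0],
                            [0.0, 0.0, 0.1, 0.6, 0.3, 0.0],
                            [0.0, 0.0, 0.0, 0.2, 0.6, 0.2]])
loss = F.kl_div(F.log_softmax(logits, dim=1), target_dist, reduction="batchmean")
loss.backward()

# (2) QWK between predicted and reference grades on a test set.
y_true = [0, 2, 2, 3, 4, 5, 1, 3]
y_pred = [0, 2, 3, 3, 3, 5, 1, 2]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```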
Affiliation(s)
- Mizuho Nishio
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017, Japan
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Correspondence: ; Tel.: +81-78-382-6104; Fax: +81-78-382-6129
- Hidetoshi Matsuo
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017, Japan
- Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Osamu Sugiyama
- Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City 577-8502, Japan
- Koji Fujimoto
- Department of Real World Data Research and Development, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
11. Haque MIU, Mukherjee D, Stopka SA, Agar NYR, Hinkle J, Ovchinnikova OS. Deep Learning on Multimodal Chemical and Whole Slide Imaging Data for Predicting Prostate Cancer Directly from Tissue Images. J Am Soc Mass Spectrom 2023; 34:227-235. PMID: 36625762. PMCID: PMC10479534. DOI: 10.1021/jasms.2c00254.
Abstract
Prostate cancer is one of the most common cancers globally and is the second most common cancer in the male population in the US. Here we develop a study correlating hematoxylin and eosin (H&E)-stained biopsy data with MALDI mass-spectrometric imaging data of the corresponding tissue to determine cancerous regions and their unique chemical signatures, and to examine how the predicted regions vary with respect to the original pathological annotations. We obtain features from high-resolution optical micrographs of whole-slide H&E-stained data through deep learning and spatially register them with mass spectrometry imaging (MSI) data to correlate the chemical signature with the tissue anatomy. We then use the learned correlation to predict prostate cancer from observed H&E images using trained coregistered MSI data. This multimodal approach can predict cancerous regions with ∼80% accuracy, which indicates a correlation between optical H&E features and the chemical information found in MSI. We show that such paired multimodal data can be used to train feature extraction networks on H&E data, which bypasses the need to acquire expensive MSI data and eliminates the need for manual annotation, saving valuable time. Two chemical biomarkers were also found to be predictive of the ground-truth cancerous regions. This study shows promise for generating improved patient treatment trajectories by predicting prostate cancer directly from readily available H&E-stained biopsy images, aided by coregistered MSI data.
Affiliation(s)
- Md Inzamam Ul Haque
- The Bredesen Center, University of Tennessee, Knoxville, Tennessee 37996, United States
- Debangshu Mukherjee
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, United States
- Sylwia A Stopka
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Nathalie Y R Agar
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, Massachusetts 02115, United States
- Jacob Hinkle
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, United States
- Olga S Ovchinnikova
- The Bredesen Center, University of Tennessee, Knoxville, Tennessee 37996, United States
12. Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. PMID: 36525770. DOI: 10.1016/j.compmedimag.2022.102155.
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium.
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium.
13. Ramamurthy K, Varikuti AR, Gupta B, Aswani N. A deep learning network for Gleason grading of prostate biopsies using EfficientNet. Biomed Tech (Berl) 2022; 68:187-198. PMID: 36332194. DOI: 10.1515/bmt-2022-0201.
Abstract
Objectives
The most crucial part of cancer diagnosis is severity grading. The Gleason score is a widely used grading system for prostate cancer. Manual examination and grading of microscopic images is tiresome and time-consuming. Hence, to automate the Gleason grading process, a novel deep learning network is proposed in this work.
Methods
In this work, a deep learning network for Gleason grading of prostate cancer is proposed based on the EfficientNet architecture. It applies a compound scaling method to balance the dimensions of the underlying network. In addition, an attention branch is added to EfficientNet-B7 for precise feature weighting (see the sketch after this abstract).
Results
To the best of our knowledge, this is the first work that integrates an additional attention branch with EfficientNet architecture for Gleason grading. The proposed models were trained using H&E-stained samples from prostate cancer Tissue Microarrays (TMAs) in the Harvard Dataverse dataset.
Conclusions
The proposed network was able to outperform existing methods, achieving a kappa score of 0.5775.
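The sketch below illustrates the general idea of attaching an attention branch to a CNN backbone for feature re-weighting before classification. A small stand-in backbone replaces EfficientNet-B7 so the example stays self-contained, and the attention design is an illustrative assumption rather than the paper's exact architecture.

```python
# Hedged sketch of an attention branch that re-weights backbone feature maps.
import torch
import torch.nn as nn

class AttentionGradingNet(nn.Module):
    def __init__(self, channels=64, n_grades=4):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for EfficientNet-B7
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.attention = nn.Sequential(                # attention branch
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())   # per-location weights in [0, 1]
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, n_grades))

    def forward(self, x):
        feats = self.backbone(x)
        weights = self.attention(feats)                # (B, 1, H, W)
        return self.classifier(feats * weights)        # weighted feature pooling

model = AttentionGradingNet()
print(model(torch.randn(2, 3, 224, 224)).shape)        # torch.Size([2, 4])
```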
Affiliation(s)
- Karthik Ramamurthy
- Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Abinash Reddy Varikuti
- School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Bhavya Gupta
- School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Nehal Aswani
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
14. Prostate cancer histopathology using label-free multispectral deep-UV microscopy quantifies phenotypes of tumor aggressiveness and enables multiple diagnostic virtual stains. Sci Rep 2022; 12:9329. PMID: 35665770. PMCID: PMC9167293. DOI: 10.1038/s41598-022-13332-9.
Abstract
Identifying prostate cancer patients that are harboring aggressive forms of prostate cancer remains a significant clinical challenge. Here we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease, thus providing a new tool to help address this important challenge. We find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that provides unique structural insight (i.e., molecular maps or “optical stains”) of thin tissue sections with subcellular (nanoscale) resolution. We show that this phenotypical continuum can also be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. In addition to providing several novel “optical stains” with contrast for disease, we also adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images, thus providing multiple stains (including the gold-standard H&E) from the same unlabeled specimen. Agreement between the virtual H&E images and the H&E-stained tissue sections is evaluated by a panel of pathologists who find that the two modalities are in excellent agreement. This work has significant implications towards improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. This same approach can also be applied broadly in other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
15. Mata C, Walker P, Oliver A, Martí J, Lalande A. Usefulness of Collaborative Work in the Evaluation of Prostate Cancer from MRI. Clin Pract 2022; 12:350-362. PMID: 35645317. PMCID: PMC9149964. DOI: 10.3390/clinpract12030040.
Abstract
The aim of this study is to show the usefulness of collaborative work in the evaluation of prostate cancer from T2-weighted MRI using a dedicated software tool. The variability of annotations on images of the prostate gland (central and peripheral zones as well as tumour) by two independent experts was first evaluated, and then compared with a consensus between these two experts. Using a prostate MRI database, the experts drew regions of interest (ROIs) corresponding to healthy prostate (peripheral and central zones) and cancer. One of the experts then drew the ROI with knowledge of the other expert's ROI. The surface area of each ROI was used to measure the Hausdorff distance, and the Dice coefficient was measured from the respective contours. These metrics were evaluated between the different experiments, taking the annotations of the second expert as the reference. The results showed that the significant differences between the two experts disappeared with collaborative work. To conclude, this study shows that collaborative work with a dedicated tool allows a consensus between experts in the evaluation of prostate cancer from T2-weighted MRI.
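The two agreement metrics used above can be computed from binary ROI masks as in the sketch below. The rectangular masks are synthetic placeholders, and applying the Hausdorff distance to all ROI pixels (rather than extracted contours) is a simplifying assumption.

```python
# Hedged sketch of Dice and Hausdorff agreement between two experts' ROIs.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(mask_a, mask_b):
    pts_a = np.argwhere(mask_a)                      # pixel coordinates of each ROI
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

expert1 = np.zeros((128, 128), bool); expert1[30:90, 30:90] = True
expert2 = np.zeros((128, 128), bool); expert2[35:95, 32:92] = True
print(dice(expert1, expert2), hausdorff(expert1, expert2))
```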
Affiliation(s)
- Christian Mata
- Pediatric Computational Imaging Research Group, Hospital Sant Joan de Déu, 08950 Esplugues de Llobregat, Spain
- Research Centre for Biomedical Engineering (CREB), Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain
- Correspondence:
- Paul Walker
- ImViA Laboratory, Université de Bourgogne Franche-Comté, 64 Rue de Sully, 21000 Dijon, France; (P.W.); (A.L.)
- Arnau Oliver
- Institute of Computer Vision and Robotics, University of Girona, Campus Montilivi, Ed. P-IV, 17003 Girona, Spain; (A.O.); (J.M.)
- Joan Martí
- Institute of Computer Vision and Robotics, University of Girona, Campus Montilivi, Ed. P-IV, 17003 Girona, Spain; (A.O.); (J.M.)
- Alain Lalande
- ImViA Laboratory, Université de Bourgogne Franche-Comté, 64 Rue de Sully, 21000 Dijon, France; (P.W.); (A.L.)
16. Bankhead P. Developing image analysis methods for digital pathology. J Pathol 2022; 257:391-402. PMID: 35481680. PMCID: PMC9324951. DOI: 10.1002/path.5921.
Abstract
The potential to use quantitative image analysis and artificial intelligence is one of the driving forces behind digital pathology. However, despite novel image analysis methods for pathology being described across many publications, few become widely adopted and many are not applied in more than a single study. The explanation is often straightforward: software implementing the method is simply not available, or is too complex, incomplete, or dataset‐dependent for others to use. The result is a disconnect between what seems already possible in digital pathology based upon the literature, and what actually is possible for anyone wishing to apply it using currently available software. This review begins by introducing the main approaches and techniques involved in analysing pathology images. I then examine the practical challenges inherent in taking algorithms beyond proof‐of‐concept, from both a user and developer perspective. I describe the need for a collaborative and multidisciplinary approach to developing and validating meaningful new algorithms, and argue that openness, implementation, and usability deserve more attention among digital pathology researchers. The review ends with a discussion about how digital pathology could benefit from interacting with and learning from the wider bioimage analysis community, particularly with regard to sharing data, software, and ideas. © 2022 The Author. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Peter Bankhead
- Edinburgh Pathology, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Centre for Genomic & Experimental Medicine, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Cancer Research UK Edinburgh Centre, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
17. Adeoye J, Akinshipo A, Thomson P, Su YX. Artificial intelligence-based prediction for cancer-related outcomes in Africa: Status and potential refinements. J Glob Health 2022; 12:03017. PMID: 35493779. PMCID: PMC9022723. DOI: 10.7189/jogh.12.03017.
Affiliation(s)
- John Adeoye
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Oral Cancer Research Theme, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Abdulwarith Akinshipo
- Department of Oral and Maxillofacial Pathology and Biology, Faculty of Dentistry, University of Lagos, Lagos, Nigeria
- Peter Thomson
- College of Medicine and Dentistry, James Cook University, Cairns, Queensland, Australia
- Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Oral Cancer Research Theme, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
18. Prabhu S, Prasad K, Robels-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022; 142:105209. DOI: 10.1016/j.compbiomed.2022.105209.
19
20. Bhattacharjee S, Ikromjanov K, Carole KS, Madusanka N, Cho NH, Hwang YB, Sumon RI, Kim HC, Choi HK. Cluster Analysis of Cell Nuclei in H&E-Stained Histological Sections of Prostate Cancer and Classification Based on Traditional and Modern Artificial Intelligence Techniques. Diagnostics (Basel) 2021; 12:15. PMID: 35054182. PMCID: PMC8774423. DOI: 10.3390/diagnostics12010015.
Abstract
Biomarker identification is very important for differentiating the grade groups in histopathological sections of prostate cancer (PCa), and assessing clusters of cell nuclei is essential for pathological investigation. In this study, we present a computer-based method for cluster analysis of cell nuclei and apply traditional (i.e., unsupervised) and modern (i.e., supervised) artificial intelligence (AI) techniques for distinguishing the grade groups of PCa. Two PCa datasets were collected to carry out this research. Histopathology samples were obtained from whole slides stained with hematoxylin and eosin (H&E). State-of-the-art approaches were proposed for color normalization, cell nuclei segmentation, feature selection, and classification. A traditional minimum spanning tree (MST) algorithm was employed to identify the clusters and better capture the proliferation and community structure of cell nuclei. K-medoids clustering and stacked ensemble machine learning (ML) approaches were used to perform the traditional and modern AI-based classification. Binary and multiclass classification were carried out to compare model quality and results between the grades of PCa. Furthermore, a comparative analysis between the traditional and modern AI techniques was performed using different performance metrics (i.e., statistical parameters). Cluster features of the cell nuclei can provide useful information for cancer grading; however, further validation of the cluster analysis is required to achieve outstanding classification results.
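The sketch below illustrates MST-based cluster analysis of nucleus centroids in the spirit described above: build a minimum spanning tree over pairwise distances, cut edges longer than a threshold, and read clusters off the connected components. The random centroids and the threshold value are illustrative assumptions, not the paper's segmented nuclei or parameters.

```python
# Hedged sketch of MST-based clustering of nucleus centroids.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(42)
centroids = np.vstack([rng.normal(loc, 5.0, (30, 2))      # three nuclear "communities"
                       for loc in ([0, 0], [60, 10], [30, 70])])

dist = squareform(pdist(centroids))                         # full pairwise distance matrix
mst = minimum_spanning_tree(dist).toarray()

threshold = 15.0                                            # cut long MST edges
mst[mst > threshold] = 0.0
n_clusters, labels = connected_components(mst, directed=False)
print(n_clusters, np.bincount(labels))                      # clusters and their sizes
```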
Affiliation(s)
- Kobiljon Ikromjanov
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea; (K.I.); (K.S.C.); (Y.-B.H.); (R.I.S.); (H.-C.K.)
- Kouayep Sonia Carole
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea; (K.I.); (K.S.C.); (Y.-B.H.); (R.I.S.); (H.-C.K.)
- Nuwan Madusanka
- School of Computing & IT, Sri Lanka Technological Campus, Paduka 10500, Sri Lanka;
- Nam-Hoon Cho
- Department of Pathology, Yonsei University Hospital, Seoul 03722, Korea;
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea; (K.I.); (K.S.C.); (Y.-B.H.); (R.I.S.); (H.-C.K.)
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea; (K.I.); (K.S.C.); (Y.-B.H.); (R.I.S.); (H.-C.K.)
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea; (K.I.); (K.S.C.); (Y.-B.H.); (R.I.S.); (H.-C.K.)
- Heung-Kook Choi
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae 50834, Korea;
- Correspondence: ; Tel.: +82-10-6733-3437