1
Wang YL, Gao S, Xiao Q, Li C, Grzegorzek M, Zhang YY, Li XH, Kang Y, Liu FH, Huang DH, Gong TT, Wu QJ. Role of artificial intelligence in digital pathology for gynecological cancers. Comput Struct Biotechnol J 2024;24:205-212. PMID: 38510535; PMCID: PMC10951449; DOI: 10.1016/j.csbj.2024.03.007.
Abstract
The diagnosis of cancer is typically based on histopathological sections or biopsies on glass slides. Amid the rapid growth of oncology data, artificial intelligence (AI) approaches have greatly enhanced our ability to extract quantitative information from digital histopathology images. Gynecological cancers are major diseases affecting women's health worldwide. They are characterized by high mortality and poor prognosis, underscoring the critical importance of early detection, treatment, and identification of prognostic factors. This review highlights the various clinical applications of AI in gynecological cancers using digitized histopathology slides. In particular, deep learning models have shown promise in accurate diagnosis, classification of histopathological subtypes, and prediction of treatment response and prognosis. Furthermore, integration with transcriptomics, proteomics, and other multi-omics techniques can provide valuable insights into the molecular features of disease. Despite the considerable potential of AI, substantial challenges remain: further improvements in data acquisition and model optimization are required, and broader clinical applications, such as biomarker discovery, need to be explored.
Affiliation(s)
- Ya-Li Wang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Information Center, The Fourth Affiliated Hospital of China Medical University, Shenyang, China
- Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qian Xiao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Ying-Ying Zhang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Xiao-Han Li
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Ye Kang
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Fang-Hua Liu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Dong-Hui Huang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qi-Jun Wu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- NHC Key Laboratory of Advanced Reproductive Medicine and Fertility (China Medical University), National Health Commission, Shenyang, China
2
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, a considerable gap remains in adopting and integrating these algorithms in clinical practice. This raises a significant question regarding the direction and trends of CPath. In this article we provide a comprehensive review of more than 800 papers, addressing challenges from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3
Elforaici MEA, Montagnon E, Romero FP, Le WT, Azzi F, Trudel D, Nguyen B, Turcotte S, Tang A, Kadoury S. Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction. Med Image Anal 2024;99:103346. PMID: 39423564; DOI: 10.1016/j.media.2024.103346.
Abstract
Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as tumor regression grade (TRG), to establish an accurate prognosis for patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations: they are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing machine learning approaches for prognosis prediction mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction because they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with hematoxylin and eosin (H&E) and hematoxylin phloxine saffron (HPS). We first employ a generative adversarial network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images used as input to our prediction pipeline. We propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps. Specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the features extracted for the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision Transformer (ViT) in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction.
We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to competing models, with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, it yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our pipeline can provide automated prognosis for pathologists and oncologists, and can greatly advance precision medicine in managing CLM patients.
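The concordance index (c-index) reported above summarizes how often a model ranks pairs of patients correctly by risk, accounting for censoring. As background for readers, a minimal pure-Python sketch of Harrell's c-index (an illustrative implementation, not the authors' code; all names are ours):

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable pairs
    in which the higher-risk subject experiences the event earlier.
    times: observed follow-up times; events: 1 = event, 0 = censored;
    risk_scores: higher score means higher predicted risk."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had the event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1  # ties count as half-concordant
    return (concordant + 0.5 * tied) / comparable

# toy example: a model that ranks all patients perfectly
times = [2, 4, 6, 8]
events = [1, 1, 0, 1]   # the third patient is censored
scores = [0.9, 0.7, 0.5, 0.3]
c = harrell_c_index(times, events, scores)  # → 1.0
```

A value of 0.5 corresponds to random ranking, so the reported 0.804 for OS indicates substantially better-than-chance risk ordering.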
Affiliation(s)
- Mohamed El Amine Elforaici
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada.
- Francisco Perdigón Romero
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Feryel Azzi
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Dominique Trudel
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
- Simon Turcotte
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Surgery, Université de Montréal, Montreal, Canada
- An Tang
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montreal, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
4
Escobar Díaz Guerrero R, Carvalho L, Bocklitz T, Popp J, Oliveira JL. A Data Augmentation Methodology to Reduce the Class Imbalance in Histopathology Images. J Imaging Inform Med 2024;37:1767-1782. PMID: 38485898; PMCID: PMC11300732; DOI: 10.1007/s10278-024-01018-9.
Abstract
Deep learning techniques have recently yielded remarkable results across various fields. However, the quality of these results depends heavily on the quality and quantity of data used during the training phase. One common issue in multi-class and multi-label classification is class imbalance, where one or several classes make up a substantial portion of the total instances. This imbalance causes the neural network to prioritize features of the majority classes during training, as their detection leads to higher scores. In the context of object detection, two types of imbalance can be identified: (1) an imbalance between the space occupied by the foreground and background and (2) an imbalance in the number of instances for each class. This paper aims to address the second type of imbalance without exacerbating the first. To achieve this, we propose a modification of the copy-paste data augmentation technique, combined with weight-balancing methods in the loss function. This strategy was specifically tailored to improve the performance in datasets with a high instance density, where instance overlap could be detrimental. To validate our methodology, we applied it to a highly unbalanced dataset focused on nuclei detection. The results show that this hybrid approach improves the classification of minority classes without significantly compromising the performance of majority classes.
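The weight-balancing component described above can be illustrated with inverse-frequency class weights applied to a cross-entropy loss, so that rare classes contribute more per sample. A minimal sketch under our own assumptions (helper names and the weighting scheme are ours, not the paper's implementation):

```python
import math

def inverse_frequency_weights(labels, n_classes):
    """Weight each class inversely to its frequency so that minority
    classes contribute more to the training loss."""
    counts = [max(labels.count(c), 1) for c in range(n_classes)]
    total = sum(counts)
    # normalized so a perfectly balanced dataset gives weight 1.0 per class
    return [total / (n_classes * cnt) for cnt in counts]

def weighted_cross_entropy(probs, label, weights):
    """Cross-entropy for one sample, scaled by its class weight.
    probs: predicted class probabilities; label: true class index."""
    return -weights[label] * math.log(probs[label])

labels = [0, 0, 0, 0, 0, 0, 1, 1, 2]           # class 0 dominates
w = inverse_frequency_weights(labels, 3)       # → [0.5, 1.5, 3.0]
loss_rare = weighted_cross_entropy([0.25, 0.25, 0.5], 2, w)
```

With such weights, a misclassified minority-class nucleus is penalized more heavily than a majority-class one, which is the effect the hybrid copy-paste strategy relies on.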
Affiliation(s)
- Rodrigo Escobar Díaz Guerrero
- BMD Software, PCI - Creative Science Park, 3830-352, Ilhavo, Portugal.
- DETI/IEETA, University of Aveiro, 3810-193, Aveiro, Portugal.
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz Research Alliance 'Health Technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany.
- Lina Carvalho
- Institute of Anatomical and Molecular Pathology, Faculty of Medicine, University of Coimbra, 3004-504, Coimbra, Portugal
- Thomas Bocklitz
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz Research Alliance 'Health Technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany
- Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
- Institute of Computer Science, Faculty of Mathematics, Physics & Computer Science, Bayreuth, Germany
- Juergen Popp
- Leibniz Institute of Photonic Technology Jena, Member of Leibniz Research Alliance 'Health Technologies', Albert-Einstein-Straße 9, 07745, Jena, Germany
- Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
5
White BS, Woo XY, Koc S, Sheridan T, Neuhauser SB, Wang S, Evrard YA, Chen L, Foroughi pour A, Landua JD, Mashl RJ, Davies SR, Fang B, Rosa MG, Evans KW, Bailey MH, Chen Y, Xiao M, Rubinstein JC, Sanderson BJ, Lloyd MW, Domanskyi S, Dobrolecki LE, Fujita M, Fujimoto J, Xiao G, Fields RC, Mudd JL, Xu X, Hollingshead MG, Jiwani S, Acevedo S, Davis-Dusenbery BN, Robinson PN, Moscow JA, Doroshow JH, Mitsiades N, Kaochar S, Pan CX, Carvajal-Carmona LG, Welm AL, Welm BE, Govindan R, Li S, Davies MA, Roth JA, Meric-Bernstam F, Xie Y, Herlyn M, Ding L, Lewis MT, Bult CJ, Dean DA, Chuang JH. A Pan-Cancer Patient-Derived Xenograft Histology Image Repository with Genomic and Pathologic Annotations Enables Deep Learning Analysis. Cancer Res 2024;84:2060-2072. PMID: 39082680; PMCID: PMC11217732; DOI: 10.1158/0008-5472.can-23-1349.
Abstract
Patient-derived xenografts (PDX) model human intra- and intertumoral heterogeneity in the context of the intact tissue of immunocompromised mice. Histologic imaging via hematoxylin and eosin (H&E) staining is routinely performed on PDX samples, which could be harnessed for computational analysis. Prior studies of large clinical H&E image repositories have shown that deep learning analysis can identify intercellular and morphologic signals correlated with disease phenotype and therapeutic response. In this study, we developed an extensive, pan-cancer repository of >1,000 PDX and paired parental tumor H&E images. These images, curated from the PDX Development and Trial Centers Research Network Consortium, had a range of associated genomic and transcriptomic data, clinical metadata, pathologic assessments of cell composition, and, in several cases, detailed pathologic annotations of neoplastic, stromal, and necrotic regions. The amenability of these images to deep learning was highlighted through three applications: (i) development of a classifier for neoplastic, stromal, and necrotic regions; (ii) development of a predictor of xenograft-transplant lymphoproliferative disorder; and (iii) application of a published predictor of microsatellite instability. Together, this PDX Development and Trial Centers Research Network image repository provides a valuable resource for controlled digital pathology analysis, both for the evaluation of technical issues and for the development of computational image-based methods that make clinical predictions based on PDX treatment studies. Significance: A pan-cancer repository of >1,000 patient-derived xenograft hematoxylin and eosin-stained images will facilitate cancer biology investigations through histopathologic analysis and contributes important model system data that expand existing human histology repositories.
Affiliation(s)
- Brian S. White
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Xing Yi Woo
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore.
- Soner Koc
- Velsera, Charlestown, Massachusetts.
- Todd Sheridan
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Shidan Wang
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Yvonne A. Evrard
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Li Chen
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Ali Foroughi pour
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- R. Jay Mashl
- Washington University School of Medicine, St. Louis, Missouri.
- Bingliang Fang
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Kurt W. Evans
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Matthew H. Bailey
- Simmons Center for Cancer Research, Brigham Young University, Provo, Utah.
- Yeqing Chen
- The Wistar Institute, Philadelphia, Pennsylvania.
- Min Xiao
- The Wistar Institute, Philadelphia, Pennsylvania.
- Sergii Domanskyi
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Maihi Fujita
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Junya Fujimoto
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Guanghua Xiao
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Ryan C. Fields
- Washington University School of Medicine, St. Louis, Missouri.
- Xiaowei Xu
- The Wistar Institute, Philadelphia, Pennsylvania.
- Shahanawaz Jiwani
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Peter N. Robinson
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Alana L. Welm
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Bryan E. Welm
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Shunqiang Li
- Washington University School of Medicine, St. Louis, Missouri.
- Jack A. Roth
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Yang Xie
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Li Ding
- Washington University School of Medicine, St. Louis, Missouri.
- Jeffrey H. Chuang
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
6
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024;250:108200. PMID: 38677080; DOI: 10.1016/j.cmpb.2024.108200.
Abstract
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings than single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS: A literature search following PRISMA guidelines was conducted to identify papers published between 2013 and 2023 that analyzed multi-centric and multi-device medical imaging studies using image harmonization approaches. RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but the adoption of machine and deep learning has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS: Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable AI analysis of integrated multi-source datasets. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
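Grayscale normalization, the first harmonization technique listed in the results, can be as simple as rescaling each image's intensities to a shared range so that scanners with different dynamic ranges become comparable. An illustrative min-max sketch (our own minimal example, not drawn from any particular reviewed paper):

```python
def minmax_normalize(pixels, lo=0.0, hi=1.0):
    """Rescale intensities to [lo, hi] so that images acquired on
    different devices share a common grayscale range."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:           # constant image: map everything to lo
        return [lo for _ in pixels]
    scale = (hi - lo) / (pmax - pmin)
    return [lo + (p - pmin) * scale for p in pixels]

# two "scanners" with different dynamic ranges become comparable
img_a = minmax_normalize([10, 50, 90])     # → [0.0, 0.5, 1.0]
img_b = minmax_normalize([100, 150, 200])  # → [0.0, 0.5, 1.0]
```

More sophisticated variants cited in the review (histogram matching, stain-color normalization, learned harmonization) follow the same principle of mapping heterogeneous inputs onto a common reference distribution.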
Affiliation(s)
- Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
7
Niedowicz DM, Gollihue JL, Weekman EM, Phe P, Wilcock DM, Norris CM, Nelson PT. Using digital pathology to analyze the murine cerebrovasculature. J Cereb Blood Flow Metab 2024;44:595-610. PMID: 37988134; PMCID: PMC10981399; DOI: 10.1177/0271678x231216142.
Abstract
Research on the cerebrovasculature may provide insights into brain health and disease. Immunohistochemical staining is one way to visualize blood vessels, and digital pathology has the potential to revolutionize the measurement of blood vessel parameters. These tools provide opportunities for translational mouse model research. However, mouse brain tissue presents a formidable set of technical challenges, including potentially high background staining and cross-reactivity of endogenous IgG. Formalin-fixed paraffin-embedded (FFPE) and fixed frozen sections, both of which are widely used, may require different methods. In this study, we optimized blood vessel staining in mouse brain tissue, testing both FFPE and fixed frozen sections. A panel of immunohistochemical blood vessel markers (including CD31, CD34, collagen IV, DP71, and VWF) was tested to evaluate their suitability for digital pathological analysis. Collagen IV provided the best immunostaining results in both FFPE and fixed frozen murine brain sections, with highly specific staining of large and small blood vessels and low background staining. Subsequent analysis of collagen IV-stained sections showed region- and sex-specific differences in vessel density and vessel wall thickness. We conclude that digital pathology provides a useful tool for relatively unbiased analysis of the murine cerebrovasculature, provided proper protein markers are used.
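Vessel density of the kind quantified here is, in its simplest form, the fraction of pixels in a region of interest that a staining-based segmentation labels as vessel. A hypothetical sketch on a binary vessel mask (our own illustration of the metric, not the study's actual pipeline):

```python
def vessel_density(mask):
    """Fraction of pixels labeled as vessel in a binary mask
    (1 = vessel, 0 = background), as a digital pathology platform
    might report for a region of interest."""
    total = sum(len(row) for row in mask)
    vessel = sum(sum(row) for row in mask)
    return vessel / total

# toy 3x4 region: 3 vessel pixels out of 12
mask = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
d = vessel_density(mask)  # → 0.25
```

Comparing such area fractions across brain regions and between sexes is the kind of analysis the collagen IV-stained sections enabled.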
Affiliation(s)
- Dana M Niedowicz
- Sanders Brown Center on Aging, University of Kentucky, Lexington, KY, USA
- Jenna L Gollihue
- Sanders Brown Center on Aging, University of Kentucky, Lexington, KY, USA
- Erica M Weekman
- Stark Neurosciences Research Institute, Department of Neurology, Indiana University School of Medicine, Indianapolis, IN, USA
- Panhavuth Phe
- Sanders Brown Center on Aging, University of Kentucky, Lexington, KY, USA
- Donna M Wilcock
- Stark Neurosciences Research Institute, Department of Neurology, Indiana University School of Medicine, Indianapolis, IN, USA
- Christopher M Norris
- Sanders Brown Center on Aging, University of Kentucky, Lexington, KY, USA
- Department of Pharmacology, University of Kentucky, Lexington, KY, USA
- Peter T Nelson
- Sanders Brown Center on Aging, University of Kentucky, Lexington, KY, USA
- Department of Pathology, University of Kentucky, Lexington, KY, USA
8
Gambella A, Salvi M, Molinaro L, Patrono D, Cassoni P, Papotti M, Romagnoli R, Molinari F. Improved assessment of donor liver steatosis using Banff consensus recommendations and deep learning algorithms. J Hepatol 2024;80:495-504. PMID: 38036009; DOI: 10.1016/j.jhep.2023.11.013.
Abstract
BACKGROUND & AIMS The Banff Liver Working Group recently published consensus recommendations for steatosis assessment in donor liver biopsy, but few studies reported their use and no automated deep-learning algorithms based on the proposed criteria have been developed so far. We evaluated Banff recommendations on a large monocentric series of donor liver needle biopsies by comparing pathologists' scores with those generated by convolutional neural networks (CNNs) we specifically developed for automated steatosis assessment. METHODS We retrospectively retrieved 292 allograft liver needle biopsies collected between January 2016 and January 2020 and performed steatosis assessment using a former intra-institution method (pre-Banff method) and the newly introduced Banff recommendations. Scores provided by pathologists and CNN models were then compared, and the degree of agreement was measured with the intraclass correlation coefficient (ICC). RESULTS Regarding the pre-Banff method, poor agreement was observed between the pathologist and CNN models for small droplet macrovesicular steatosis (ICC: 0.38), large droplet macrovesicular steatosis (ICC: 0.08), and the final combined score (ICC: 0.16) evaluation, but none of these reached statistically significance. Interestingly, significantly improved agreement was observed using the Banff approach: ICC was 0.93 for the low-power score (p <0.001), 0.89 for the high-power score (p <0.001), and 0.93 for the final score (p <0.001). Comparing the pre-Banff method with the Banff approach on the same biopsy, pathologist and CNN model assessment showed a mean (±SD) percentage of discrepancy of 26.89 (±22.16) and 1.20 (±5.58), respectively. CONCLUSIONS Our findings support the use of Banff recommendations in daily practice and highlight the need for a granular analysis of their effect on liver transplantation outcomes. 
IMPACT AND IMPLICATIONS We developed and validated the first automated deep-learning algorithms for standardized steatosis assessment based on the Banff Liver Working Group consensus recommendations. Our algorithm provides an unbiased automated evaluation of steatosis, which will lay the groundwork for granular analysis of steatosis's short- and long-term effects on organ viability, enabling the identification of clinically relevant steatosis cut-offs for donor organ acceptance. Implementing our algorithm in daily clinical practice will allow for a more efficient and safe allocation of donor organs, improving the post-transplant outcomes of patients.
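The agreement statistic used throughout this study, the intraclass correlation coefficient, can be sketched for the two-rater (pathologist vs. CNN) case as follows. This is an illustrative ICC(2,1) implementation, not the authors' code; the example ratings are made up.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    scores: (n_subjects, k_raters) matrix, e.g. column 0 = pathologist,
    column 1 = CNN model, one row per biopsy.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    ssr = k * ((row_means - grand) ** 2).sum()   # between-subject variation
    ssc = n * ((col_means - grand) ** 2).sum()   # between-rater variation
    sst = ((scores - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual variation

    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between the two raters yields an ICC of 1.
ratings = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
print(icc_2_1(ratings))  # -> 1.0
```

A constant offset between raters (one always scoring higher) lowers ICC(2,1), since it measures absolute agreement rather than mere correlation.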
Affiliation(s)
- Alessandro Gambella
- Pathology Unit, Department of Medical Sciences, University of Turin, Turin, Italy; Division of Liver and Transplant Pathology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Massimo Salvi
- Department of Electronics and Telecommunications, PolitoBIOMed Lab, Politecnico di Torino, Biolab, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Luca Molinaro
- Division of Pathology, AOU Città Della Salute e Della Scienza di Torino, Turin, Italy
- Damiano Patrono
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Paola Cassoni
- Pathology Unit, Department of Medical Sciences, University of Turin, Turin, Italy
- Mauro Papotti
- Division of Pathology, Department of Oncology, University of Turin, Turin, Italy
- Renato Romagnoli
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Filippo Molinari
- Department of Electronics and Telecommunications, PolitoBIOMed Lab, Politecnico di Torino, Biolab, Corso Duca degli Abruzzi 24, 10129 Turin, Italy

9
Zhou H, Wang Y, Zhang B, Zhou C, Vonsky MS, Mitrofanova LB, Zou D, Li Q. Unsupervised domain adaptation for histopathology image segmentation with incomplete labels. Comput Biol Med 2024; 171:108226. [PMID: 38428096 DOI: 10.1016/j.compbiomed.2024.108226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Revised: 02/04/2024] [Accepted: 02/25/2024] [Indexed: 03/03/2024]
Abstract
Stain variations pose a major challenge to deep learning segmentation algorithms in histopathology images. Current unsupervised domain adaptation methods show promise in improving model generalization across diverse staining appearances but demand abundant, accurately labeled source domain data. This paper considers a novel scenario: an unsupervised domain adaptation segmentation task with incompletely labeled source data. We propose a Stain-Adaptive Segmentation Network with Incomplete Labels (SASN-IL). Specifically, the algorithm consists of two stages. The first is an incomplete-label correction stage, involving reliable model selection and label correction to rectify false-negative regions in incomplete labels. The second is the unsupervised domain adaptation stage, which achieves segmentation on the target domain. In this stage, we introduce an adaptive stain transformation module, which adjusts the degree of transformation based on segmentation performance. We evaluate our method on a gastric cancer dataset, demonstrating significant improvements, with a 10.01% increase in Dice coefficient compared to the baseline and competitive performance relative to existing methods.
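The Dice coefficient reported above is a standard overlap measure between a predicted and a reference segmentation mask; a generic sketch (not the paper's evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks of the same shape:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(pred, gt), 4))  # -> 0.6667
```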
Affiliation(s)
- Huihui Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Benyan Zhang
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Chunhua Zhou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Maxim S Vonsky
- D.I. Mendeleev Institute for Metrology, St. Petersburg 190005, Russia
- Duowu Zou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China

10
Durán-Díaz I, Sarmiento A, Fondón I, Bodineau C, Tomé M, Durán RV. A Robust Method for the Unsupervised Scoring of Immunohistochemical Staining. ENTROPY (BASEL, SWITZERLAND) 2024; 26:165. [PMID: 38392420 PMCID: PMC10888407 DOI: 10.3390/e26020165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2023] [Revised: 02/02/2024] [Accepted: 02/07/2024] [Indexed: 02/24/2024]
Abstract
Immunohistochemistry is a powerful technique that is widely used in biomedical research and the clinic; it determines the expression levels of proteins of interest in tissue samples from the color intensity produced when specific antibodies bind their target biomarkers. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a novel method, including a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, this method was highly dependent on the parameters that control sparseness and non-negativity, as well as on algorithm initialization. Furthermore, the previously proposed method required a reference image as a starting point for the NMF algorithm. In the present work, we propose a new, simpler and more robust method for the automated, unsupervised scoring of bright-field immunohistochemical images. Our work is focused on images from tumor tissues marked with blue (nuclei) and brown (protein of interest) stains. The proposed method represents a simpler approach that, on the one hand, avoids the use of NMF in the separation stage and, on the other hand, circumvents the need for a control image. This new approach determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. This subspace is a two-dimensional space, allowing for color vector determination by considering the point density peaks. A new scoring stage is also developed in our method that, again, avoids reference images, making the procedure more robust and less dependent on parameters. Semi-quantitative image scoring experiments using five categories exhibit promising and consistent results when compared to manual scoring carried out by experts.
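The core of the separation stage, projecting optical-density pixel vectors onto the two-dimensional subspace found by PCA, can be sketched like this. The code is illustrative only; the synthetic stain vectors and all variable names are assumptions, not the authors' values.

```python
import numpy as np

def stain_subspace(rgb_pixels: np.ndarray, n_components: int = 2):
    """Project pixels onto the PCA subspace spanned by the dominant stains.

    rgb_pixels: (n_pixels, 3) array of RGB values in [0, 255].
    Returns the centered optical-density data, the PCA basis, and the
    2-D coordinates of each pixel in that basis.
    """
    od = -np.log((rgb_pixels + 1.0) / 256.0)   # optical-density transform
    centered = od - od.mean(axis=0)
    # PCA via SVD; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    coords = centered @ basis.T                # 2-D coordinates per pixel
    return centered, basis, coords

# Synthetic pixels that are mixtures of two "stain" OD vectors: the 2-D PCA
# subspace then reconstructs the centered OD data essentially exactly.
rng = np.random.default_rng(0)
blue  = np.array([0.65, 0.70, 0.29])   # hypothetical hematoxylin-like vector
brown = np.array([0.27, 0.57, 0.78])   # hypothetical DAB-like vector
weights = rng.uniform(0.05, 1.0, size=(500, 2))
od_mix = weights @ np.vstack([blue, brown])
rgb = np.clip(256.0 * np.exp(-od_mix) - 1.0, 0, 255)

centered, basis, coords = stain_subspace(rgb)
recon = coords @ basis
print(np.abs(recon - centered).max() < 1e-6)  # -> True
```

In the actual method, the two color vectors are then located as density peaks of the pixel cloud in this 2-D plane.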
Affiliation(s)
- Iván Durán-Díaz
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Auxiliadora Sarmiento
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Irene Fondón
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Clément Bodineau
- Department of Pathology, Brigham and Women's Hospital, Boston, MA 02115, USA
- Department of Genetics, Harvard Medical School, Boston, MA 02115, USA
- Mercedes Tomé
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain
- Raúl V Durán
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain

11
Voon W, Hum YC, Tee YK, Yap WS, Nisar H, Mokayed H, Gupta N, Lai KW. Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images. Sci Rep 2023; 13:20518. [PMID: 37993544 PMCID: PMC10665422 DOI: 10.1038/s41598-023-46619-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 11/02/2023] [Indexed: 11/24/2023] Open
Abstract
Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
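The Balanced Accuracy Score used for the comparison is the mean of per-class recall, which makes it robust to class imbalance; a generic computation (not the study's pipeline) is:

```python
import numpy as np

def balanced_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean of per-class recall over the classes present in y_true."""
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append((y_pred[mask] == c).mean())  # recall for class c
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 0, 1])
# Class 0 recall = 1.0, class 1 recall = 2/3, so BA = 5/6.
print(round(balanced_accuracy(y_true, y_pred), 4))  # -> 0.8333
```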
Affiliation(s)
- Wingates Voon
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yan Chai Hum
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yee Kai Tee
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Wun-She Yap
- Department of Electrical and Electronic Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Humaira Nisar
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, 31900, Kampar, Malaysia
- Hamam Mokayed
- Department of Computer Science, Electrical and Space Engineering, Lulea University of Technology, Lulea, Sweden
- Neha Gupta
- School of Electronics Engineering, Vellore Institute of Technology, Amaravati, AP, India
- Khin Wee Lai
- Department of Biomedical Engineering, Universiti Malaya, 50603, Kuala Lumpur, Malaysia

12
|
Neary-Zajiczek L, Beresna L, Razavi B, Pawar V, Shaw M, Stoyanov D. Minimum resolution requirements of digital pathology images for accurate classification. Med Image Anal 2023; 89:102891. [PMID: 37536022 DOI: 10.1016/j.media.2023.102891] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/22/2023] [Accepted: 07/06/2023] [Indexed: 08/05/2023]
Abstract
Digitization of pathology has been proposed as an essential mitigation strategy for the severe staffing crisis facing most pathology departments. Despite its benefits, several barriers have prevented widespread adoption of digital workflows, including cost and pathologist reluctance due to subjective image quality concerns. In this work, we quantitatively determine the minimum image quality requirements for binary classification of histopathology images of breast tissue in terms of spatial and sampling resolution. We train an ensemble of deep learning classifier models on publicly available datasets to obtain a baseline accuracy and computationally degrade these images according to our derived theoretical model to identify the minimum resolution necessary for acceptable diagnostic accuracy. Our results show that images can be degraded significantly below the resolution of most commercial whole-slide imaging systems while maintaining reasonable accuracy, demonstrating that macroscopic features are sufficient for binary classification of stained breast tissue. A rapid low-cost imaging system capable of identifying healthy tissue not requiring human assessment could serve as a triage system for reducing caseloads and alleviating the significant strain on the current workforce.
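The computational degradation step, reducing effective sampling resolution while keeping the image size fixed, can be sketched by block averaging and nearest-neighbour re-expansion. This is an illustrative approach, not the authors' exact degradation model.

```python
import numpy as np

def degrade_resolution(img: np.ndarray, factor: int) -> np.ndarray:
    """Simulate a lower sampling resolution: average factor x factor
    blocks, then re-expand with nearest-neighbour so the output keeps
    the original shape. Dimensions must be divisible by `factor`."""
    h, w = img.shape[:2]
    assert h % factor == 0 and w % factor == 0
    blocks = img.reshape(h // factor, factor, w // factor, factor, -1)
    low = blocks.mean(axis=(1, 3))                      # downsample
    return np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)

img = np.arange(16 * 16 * 3, dtype=float).reshape(16, 16, 3)
degraded = degrade_resolution(img, 4)
print(degraded.shape)  # -> (16, 16, 3)
```

Classifier accuracy would then be re-measured on `degraded` for increasing `factor` to locate the minimum usable resolution.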
Affiliation(s)
- Lydia Neary-Zajiczek
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
- Linas Beresna
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
- Benjamin Razavi
- University College London Medical School, 74 Huntley Street, London, WC1E 6BT, United Kingdom
- Vijay Pawar
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
- Michael Shaw
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom; National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, United Kingdom
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom

13
Ricaurte Archila L, Smith L, Sihvo HK, Koponen V, Jenkins SM, O'Sullivan DM, Cardenas Fernandez MC, Wang Y, Sivasubramaniam P, Patil A, Hopson PE, Absah I, Ravi K, Mounajjed T, Dellon ES, Bredenoord AJ, Pai R, Hartley CP, Graham RP, Moreira RK. Performance of an Artificial Intelligence Model for Recognition and Quantitation of Histologic Features of Eosinophilic Esophagitis on Biopsy Samples. Mod Pathol 2023; 36:100285. [PMID: 37474003 DOI: 10.1016/j.modpat.2023.100285] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 06/20/2023] [Accepted: 07/13/2023] [Indexed: 07/22/2023]
Abstract
We have developed an artificial intelligence (AI)-based digital pathology model for the evaluation of histologic features related to eosinophilic esophagitis (EoE). In this study, we evaluated the performance of our AI model in a cohort of pediatric and adult patients for histologic features included in the Eosinophilic Esophagitis Histologic Scoring System (EoEHSS). We collected a total of 203 esophageal biopsy samples from patients with mucosal eosinophilia of any degree (91 adult and 112 pediatric patients) and 10 normal controls from a prospectively maintained database. All cases were assessed by a specialized gastrointestinal (GI) pathologist for features in the EoEHSS at the time of original diagnosis and rescored by a central GI pathologist (R.K.M.). We subsequently analyzed whole-slide image digital slides using a supervised AI model operating in a cloud-based, deep learning AI platform (Aiforia Technologies) for peak eosinophil count (PEC) and several histopathologic features in the EoEHSS. The correlation and interobserver agreement between the AI model and pathologists (Pearson correlation coefficient [rs] = 0.89 and intraclass correlation coefficient [ICC] = 0.87 vs original pathologist; rs = 0.91 and ICC = 0.83 vs central pathologist) were similar to the correlation and interobserver agreement between pathologists for PEC (rs = 0.88 and ICC = 0.91) and broadly similar to those for most other histologic features in the EoEHSS. The AI model also accurately identified PEC of >15 eosinophils/high-power field by the original pathologist (area under the curve [AUC] = 0.98) and central pathologist (AUC = 0.98) and had similar AUCs for the presence of EoE-related endoscopic features to pathologists' assessment. Average eosinophils per epithelial unit area had similar performance compared to AI high-power field-based analysis. 
Our newly developed AI model can accurately identify, quantify, and score several of the main histopathologic features in the EoE spectrum, with agreement on EoEHSS scoring similar to that seen among GI pathologists.
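The AUC for a binary cutoff such as >15 eosinophils per high-power field can be computed directly from raw counts with the rank-based (Mann-Whitney) formulation, sketched below. The counts are made up for illustration; ties are ignored for brevity.

```python
import numpy as np

def auc_rank(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical per-biopsy peak eosinophil counts; label 1 = diagnosis
# positive by the >15/hpf criterion.
counts = np.array([2, 5, 8, 12, 25, 40, 60, 18])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(auc_rank(counts, labels))  # -> 1.0 (perfect separation)
```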
Affiliation(s)
- Sarah M Jenkins
- Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- Donnchadh M O'Sullivan
- Department of Pediatric and Adolescence Medicine, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Maria Camila Cardenas Fernandez
- Department of Pediatric and Adolescence Medicine, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Yaohong Wang
- Department of Pathology, Vanderbilt University Medical Center, Nashville, Tennessee
- Ameya Patil
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Puanani E Hopson
- Department of Pediatric and Adolescence Medicine, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Imad Absah
- Department of Pediatric and Adolescence Medicine, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Karthik Ravi
- Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Taofic Mounajjed
- Department of Pathology, Allina Hospitals and Clinics, Minneapolis, Minnesota
- Evan S Dellon
- Division of Gastroenterology and Hepatology, Department of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Albert J Bredenoord
- Department of Gastroenterology & Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Rish Pai
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Scottsdale, Arizona
- Rondell P Graham
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Roger K Moreira
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota

14
Martos O, Hoque MZ, Keskinarkaus A, Kemi N, Näpänkangas J, Eskuri M, Pohjanen VM, Kauppila JH, Seppänen T. Optimized detection and segmentation of nuclei in gastric cancer images using stain normalization and blurred artifact removal. Pathol Res Pract 2023; 248:154694. [PMID: 37494804 DOI: 10.1016/j.prp.2023.154694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Revised: 07/03/2023] [Accepted: 07/13/2023] [Indexed: 07/28/2023]
Abstract
Histological analysis with microscopy is the gold standard to diagnose and stage cancer, where slides or whole slide images are analyzed for cell morphological and spatial features by pathologists. The nuclei of cancerous cells are characterized by nonuniform chromatin distribution, irregular shapes, and varying size. As nucleus area and shape alone carry prognostic value, detection and segmentation of nuclei are among the most important steps in disease grading. However, evaluation of nuclei is a laborious, time-consuming, and subjective process with large variation among pathologists. Recent advances in digital pathology have allowed significant applications in nuclei detection, segmentation, and classification, but automated image analysis is greatly affected by staining factors, scanner variability, and imaging artifacts, requiring robust image preprocessing, normalization, and segmentation methods for clinically satisfactory results. In this paper, we aimed to evaluate and compare the digital image analysis techniques used in clinical pathology and research in the setting of gastric cancer. A literature review was conducted to evaluate potential methods of improving nuclei detection. Digitized images of 35 patients from a retrospective cohort of gastric adenocarcinoma at Oulu University Hospital in 1987-2016 were annotated for nuclei (n = 9085) by expert pathologists, and 14 images of different cancer types from the public TCGA dataset with annotated nuclei (n = 7000) were used as a comparison to evaluate applicability in other cancer types. The detection and segmentation accuracy with the selected color normalization and stain separation techniques were compared between the methods. The extracted information can be supplemented by patients' medical data and fed to the existing statistical clinical tools or subjected to subsequent AI-assisted classification and prediction models.
The performance of each method is evaluated by several metrics against the annotations done by expert pathologists. The F1-measure of 0.854 ± 0.068 is achieved with color normalization for the gastric cancer dataset, and 0.907 ± 0.044 with color deconvolution for the public dataset, showing comparable results to the earlier state-of-the-art works. The developed techniques serve as a basis for further research on application and interpretability of AI-assisted tools for gastric cancer diagnosis.
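The reported F1-measure for nuclei detection combines precision and recall over detections matched to expert annotations. A minimal sketch follows, with the matching reduced to greedy centroid-distance thresholding, which is our simplifying assumption rather than the paper's exact protocol:

```python
import numpy as np

def detection_f1(pred_centroids, gt_centroids, max_dist=6.0):
    """Greedy one-to-one matching of detected nuclei to annotated nuclei
    within `max_dist` pixels, then F1 from the TP/FP/FN counts."""
    gt = [np.asarray(g, dtype=float) for g in gt_centroids]
    tp = 0
    for p in (np.asarray(p, dtype=float) for p in pred_centroids):
        if not gt:
            break
        dists = [np.linalg.norm(p - g) for g in gt]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            tp += 1
            gt.pop(best)          # each annotation matches at most once
    fp = len(pred_centroids) - tp
    fn = len(gt_centroids) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred = [(10, 10), (50, 52), (200, 200)]   # one spurious detection
gt   = [(11, 9), (50, 50)]
print(round(detection_f1(pred, gt), 3))  # -> 0.8
```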
Affiliation(s)
- Oleg Martos
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Niko Kemi
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Juha Näpänkangas
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Maarit Eskuri
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Vesa-Matti Pohjanen
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Joonas H Kauppila
- Department of Surgery, Oulu University Hospital, Finland, and University of Oulu, Finland
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland

15
|
Chu ML, Ge XYM, Eastham J, Nguyen T, Fuji RN, Sullivan R, Ruderman D. Assessment of Color Reproducibility and Mitigation of Color Variation in Whole Slide Image Scanners for Toxicologic Pathology. Toxicol Pathol 2023; 51:313-328. [PMID: 38288712 DOI: 10.1177/01926233231224468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2024]
Abstract
Digital pathology workflows in toxicologic pathology rely on whole slide images (WSIs) from histopathology slides. Inconsistent color reproduction by WSI scanners of different models and from different manufacturers can result in different color representations and inter-scanner color variation in the WSIs. Although pathologists can accommodate a range of color variation during their evaluation of WSIs, color variability can degrade the performance of computational applications in digital pathology. In particular, color variability can compromise the generalization of artificial intelligence applications to large volumes of data from diverse sources. To address these challenges, we developed a process that includes two modules: (1) assessing the color reproducibility of our scanners and the color variation among them and (2) applying color correction to WSIs to minimize the color deviation and variation. Our process ensures consistent color reproduction across WSI scanners and enhances color homogeneity in WSIs, and its flexibility enables easy integration as a post-processing step following scanning by WSI scanners of different models and from different manufacturers.
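One simple form of the correction in module (2) is a per-scanner linear color transform fitted on a calibration target. The sketch below is our illustration of that idea, not the authors' implementation: it fits a 3x3 matrix by least squares so that each scanner's measured patch colors map onto shared reference colors.

```python
import numpy as np

def fit_color_correction(measured: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Fit M (3x3) such that measured @ M approximates reference in the
    least-squares sense. Inputs: (n_patches, 3) RGB patch values."""
    m, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return m

# Scanner with a mild red/green crosstalk relative to the reference colors.
reference = np.array([[200.0, 30.0, 30.0], [30.0, 200.0, 30.0],
                      [30.0, 30.0, 200.0], [120.0, 120.0, 120.0]])
crosstalk = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.0, 1.0]])
measured = reference @ crosstalk

m = fit_color_correction(measured, reference)
corrected = measured @ m
print(np.abs(corrected - reference).max() < 1e-6)  # -> True
```

Applying the fitted matrix to every pixel of a scanner's WSIs would then pull all scanners toward the common reference, which is the flexibility the abstract describes: a post-processing step independent of scanner model or manufacturer.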
Affiliation(s)
- Mei-Lan Chu
- Genentech Inc., South San Francisco, California, USA
- Xing-Yue M Ge
- Genentech Inc., South San Francisco, California, USA
- Trung Nguyen
- Genentech Inc., South San Francisco, California, USA
- Reina N Fuji
- Genentech Inc., South San Francisco, California, USA
- Ruth Sullivan
- Genentech Inc., South San Francisco, California, USA

16
16
|
Ye W, Chen X, Li P, Tao Y, Wang Z, Gao C, Cheng J, Li F, Yi D, Wei Z, Yi D, Wu Y. OEDL: an optimized ensemble deep learning method for the prediction of acute ischemic stroke prognoses using union features. Front Neurol 2023; 14:1158555. [PMID: 37416306 PMCID: PMC10321134 DOI: 10.3389/fneur.2023.1158555] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Accepted: 05/22/2023] [Indexed: 07/08/2023] Open
Abstract
Background Early stroke prognosis assessments are critical for decision-making regarding therapeutic intervention. We introduced the concepts of data combination, method integration, and algorithm parallelization, aiming to build an integrated deep learning model based on a combination of clinical and radiomics features and analyze its application value in prognosis prediction. Methods The research steps in this study included data sourcing and feature extraction, data processing and feature fusion, model building and optimization, and model training. Using data from 441 stroke patients, clinical and radiomics features were extracted, and feature selection was performed. Clinical, radiomics, and combined features were included to construct predictive models. We applied the concept of deep integration to the joint analysis of multiple deep learning methods, used a metaheuristic algorithm to improve the parameter search efficiency, and finally developed an acute ischemic stroke (AIS) prognosis prediction method, namely, the optimized ensemble of deep learning (OEDL) method. Results Among the clinical features, 17 features passed the correlation check. Among the radiomics features, 19 features were selected. In the comparison of the prediction performance of each method, the OEDL method based on the concept of ensemble optimization had the best classification performance. In the comparison of the prediction performance of each feature set, the inclusion of the combined features resulted in better classification performance than that of the clinical and radiomics features alone. In the comparison of the prediction performance of each balancing method, SMOTEENN, a hybrid sampling method, achieved better classification performance than the unbalanced, oversampled, and undersampled methods.
The OEDL method with combined features and mixed sampling achieved the best classification performance, with 97.89, 95.74, 94.75, 94.03, and 94.35% for Macro-AUC, ACC, Macro-R, Macro-P, and Macro-F1, respectively, and outperformed methods reported in previous studies. Conclusion The OEDL approach proposed herein could effectively achieve improved stroke prognosis prediction performance, the effect of using combined data modeling was significantly better than that of single clinical or radiomics feature models, and the proposed method had a better intervention guidance value. Our approach is beneficial for optimizing the early clinical intervention process and providing the necessary clinical decision support for personalized treatment.
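The macro-averaged metrics reported above (Macro-P, Macro-R, Macro-F1) weight every class equally regardless of its frequency; a generic computation, not the OEDL code, looks like this:

```python
import numpy as np

def macro_prf(y_true: np.ndarray, y_pred: np.ndarray):
    """Macro-averaged precision, recall, and F1 over all classes."""
    ps, rs, fs = [], [], []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    return float(np.mean(ps)), float(np.mean(rs)), float(np.mean(fs))

# Toy 3-class prognosis labels (made up for illustration).
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
macro_p, macro_r, macro_f1 = macro_prf(y_true, y_pred)
print(round(macro_f1, 3))  # -> 0.656
```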
Affiliation(s)
- Wei Ye
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Xicheng Chen
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Pengpeng Li
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Yongjun Tao
- Department of Neurology, Taizhou Municipal Hospital, Taizhou, Zhejiang, China
- Zhenyan Wang
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Chengcheng Gao
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Jian Cheng
- Department of Radiology, Taizhou Municipal Hospital, Taizhou, Zhejiang, China
- Fang Li
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Dali Yi
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Department of Health Education, College of Preventive Medicine, Army Medical University, Chongqing, China
- Zeliang Wei
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Dong Yi
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China
- Yazhou Wu
- Department of Health Statistics, College of Preventive Medicine, Army Medical University, Chongqing, China

17
|
Dai H, Gao Q, Lu J, He L. Improving the Accuracy of Saffron Adulteration Classification and Quantification through Data Fusion of Thin-Layer Chromatography Imaging and Raman Spectral Analysis. Foods 2023; 12:2322. [PMID: 37372533 DOI: 10.3390/foods12122322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2023] [Revised: 06/02/2023] [Accepted: 06/07/2023] [Indexed: 06/29/2023] Open
Abstract
Agricultural crops of high value are frequently targeted by economic adulteration across the world. Saffron powder, being one of the most expensive spices and colorants on the market, is particularly vulnerable to adulteration with extraneous plant materials or synthetic colorants. However, the current international standard method has several drawbacks, such as being vulnerable to yellow artificial colorant adulteration and requiring tedious laboratory measuring procedures. To address these challenges, we previously developed a portable and versatile method for determining saffron quality using a thin-layer chromatography technique coupled with Raman spectroscopy (TLC-Raman). In this study, our aim was to improve the accuracy of the classification and quantification of adulterants in saffron by utilizing mid-level data fusion of TLC imaging and Raman spectral data. In summary, the featured imaging data and featured Raman data were concatenated into one data matrix. The classification and quantification results of saffron adulterants were compared between the fused data and the analysis based on each individual dataset. The best classification result was obtained from the partial least squares-discriminant analysis (PLS-DA) model developed using the mid-level fusion dataset, which accurately determined saffron with artificial adulterants (red 40 or yellow 5 at 2-10%, w/w) and natural plant adulterants (safflower and turmeric at 20-100%, w/w) with an overall accuracy of 99.52% and 99.20% in the training and validation group, respectively. Regarding quantification analysis, the PLS models built with the fused data block demonstrated improved quantification performance in terms of R2 and root-mean-square errors for most of the PLS models. 
In conclusion, the present study highlighted the significant potential of fusing TLC imaging data and Raman spectral data to improve saffron classification and quantification accuracy via the mid-level data fusion, which will facilitate rapid and accurate decision-making on site.
Collapse
Affiliation(s)
- Haochen Dai
- Chenoweth Laboratory, Department of Food Science, University of Massachusetts Amherst, 102 Holdsworth Way, Amherst, MA 01003, USA
| | - Qixiang Gao
- Chenoweth Laboratory, Department of Food Science, University of Massachusetts Amherst, 102 Holdsworth Way, Amherst, MA 01003, USA
| | - Jiakai Lu
- Chenoweth Laboratory, Department of Food Science, University of Massachusetts Amherst, 102 Holdsworth Way, Amherst, MA 01003, USA
| | - Lili He
- Chenoweth Laboratory, Department of Food Science, University of Massachusetts Amherst, 102 Holdsworth Way, Amherst, MA 01003, USA
- Department of Chemistry, University of Massachusetts, Amherst, MA 01002, USA
| |
Collapse
|
18
|
Altun N, Hervello MF, Lombó F, González P. Using staining as reference for spectral imaging: Its application for the development of an analytical method to predict the presence of bacterial biofilms. Talanta 2023; 261:124655. [PMID: 37196402 DOI: 10.1016/j.talanta.2023.124655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 04/25/2023] [Accepted: 05/05/2023] [Indexed: 05/19/2023]
Abstract
At present, although spectral imaging is known to have a great potential to provide a massive amount of valuable information, the lack of reference methods remains one of the bottlenecks to accessing the full capacity of this technique. This work aims to present a staining-based reference method with digital image treatment for spectral imaging, in order to propose a fast, efficient, contactless and non-invasive analytical method to predict the presence of biofilms. Spectral images of Pseudomonas aeruginosa biofilms formed on high density polyethylene coupons were acquired in the visible and near infrared (vis-NIR) range between 400 and 1000 nm. Crystal violet staining served as a biofilm indicator, allowing the bacterial cells and the extracellular matrix to be marked on the coupon. Treated digital images of the stained biofilms were used as a reference. The size and pixels of the hyperspectral and digital images were scaled and matched to each other. Color intensity thresholds were used to differentiate the pixels associated with areas containing biofilms from those in biofilm-free areas. The model facultative Gram-negative bacterium, P. aeruginosa, which can form highly irregularly shaped and heterogeneous biofilm structures, was chosen to challenge the method, owing to these inherent difficulties. The results showed that the areas with high and low intensities were modeled with good performance, but the moderate intensity areas (with potentially weak or nascent biofilms) were quite challenging. Image processing and artificial neural network (ANN) methods were used to overcome the issues resulting from biofilm heterogeneity, as well as to train the spectral data for biofilm predictions.
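The staining-as-reference idea (threshold a stained digital image to label biofilm pixels as ground truth for co-registered spectral data) can be sketched as follows. The threshold values and array shapes are illustrative assumptions, not the study's parameters:

```python
# Illustrative sketch: threshold a crystal-violet stain-intensity image to
# produce reference labels for co-registered hyperspectral pixels.
import numpy as np

rng = np.random.default_rng(1)
stained = rng.uniform(0, 255, size=(50, 50))  # stand-in for stain intensity image

# Intensity thresholds separate biofilm-covered from biofilm-free pixels;
# the moderate band in between is the hard case the abstract describes.
LOW, HIGH = 80.0, 180.0
biofilm = stained >= HIGH            # strong stain: confident biofilm
clean = stained <= LOW               # weak stain: confident biofilm-free
uncertain = ~(biofilm | clean)       # moderate stain: ambiguous region

# These masks then serve as reference labels for training a spectral model.
labels = np.full(stained.shape, -1, dtype=int)  # -1 = ambiguous
labels[clean] = 0
labels[biofilm] = 1
```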
Collapse
Affiliation(s)
- Nazan Altun
- ASINCAR Agrifood Technology Center, Spain; Research Unit "Biotechnology in Nutraceuticals and Bioactive Compounds-BIONUC", Departamento de Biología Funcional, Área de Microbiología, Universidad de Oviedo, Oviedo, Spain; Instituto Universitario de Oncología del Principado de Asturias (IUOPA), Oviedo, Spain; Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Oviedo, Spain
| | | | - Felipe Lombó
- Research Unit "Biotechnology in Nutraceuticals and Bioactive Compounds-BIONUC", Departamento de Biología Funcional, Área de Microbiología, Universidad de Oviedo, Oviedo, Spain; Instituto Universitario de Oncología del Principado de Asturias (IUOPA), Oviedo, Spain; Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Oviedo, Spain.
| | | |
Collapse
|
19
|
Marrón-Esquivel JM, Duran-Lopez L, Linares-Barranco A, Dominguez-Morales JP. A comparative study of the inter-observer variability on Gleason grading against Deep Learning-based approaches for prostate cancer. Comput Biol Med 2023; 159:106856. [PMID: 37075600 DOI: 10.1016/j.compbiomed.2023.106856] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 02/07/2023] [Accepted: 03/30/2023] [Indexed: 04/08/2023]
Abstract
BACKGROUND Among all the cancers known today, prostate cancer is one of the most commonly diagnosed in men. With modern advances in medicine, its mortality has been considerably reduced. However, it is still a leading type of cancer in terms of deaths. The diagnosis of prostate cancer is mainly conducted via biopsy. From this test, whole-slide images are obtained, from which pathologists diagnose the cancer according to the Gleason scale. Within this scale from 1 to 5, grade 3 and above is considered malignant tissue. Several studies have shown an inter-observer discrepancy between pathologists in assigning the value of the Gleason scale. Given recent advances in artificial intelligence, its application to computational pathology, with the aim of supporting professionals with a second opinion, is of great interest. METHOD In this work, the inter-observer variability of a local dataset of 80 whole-slide images annotated by a team of 5 pathologists from the same group was analyzed at both area and label level. Four approaches were followed to train six different Convolutional Neural Network architectures, which were evaluated on the same dataset on which the inter-observer variability was analyzed. RESULTS An inter-observer agreement of κ = 0.6946 was obtained, with 46% discrepancy in terms of area size of the annotations performed by the pathologists. The best trained models achieved κ = 0.826 ± 0.014 on the test set when trained with data from the same source. CONCLUSIONS The obtained results show that deep learning-based automatic diagnosis systems could help reduce the well-known inter-observer variability that is present among pathologists and support them in their decision, serving as a second opinion or as a triage tool for medical centers.
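Inter-observer agreement of the kind reported above is typically quantified with Cohen's kappa; a minimal sketch with synthetic Gleason grades (not the study's annotations):

```python
# Cohen's kappa: agreement between two raters corrected for chance.
from sklearn.metrics import cohen_kappa_score

# Synthetic Gleason grades from two hypothetical pathologists.
pathologist_a = [3, 3, 4, 5, 3, 4, 4, 3, 5, 4]
pathologist_b = [3, 4, 4, 5, 3, 3, 4, 3, 5, 5]

kappa = cohen_kappa_score(pathologist_a, pathologist_b)
# Here observed agreement is 0.70, chance agreement 0.34, so kappa ≈ 0.545.
```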
Collapse
|
20
|
Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023; 155:106690. [PMID: 36827788 DOI: 10.1016/j.compbiomed.2023.106690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 02/06/2023] [Accepted: 02/14/2023] [Indexed: 02/21/2023]
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented here that aspires to supply clinical gland informative details by seizing every small and intricate variation in medical images, integrate second opinions, and reduce human errors. APPROACH It comprises an enhanced learning capability that extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution-degradation and vanishing-gradient problems. It has three proposed modules, namely the Atrous Convolved Residual Learning Module in the encoder as well as decoder, the Residual Attention Module in the skip connection paths, and the Atrous Convolved Transitional Module as the transitional and output layer. Also, pre-processing techniques like patch-sampling, stain-normalization, augmentation, etc. are employed to improve its generalization capability. To verify its robustness and build network invariance against digital variability, extensive experiments are carried out employing three different public datasets, i.e., GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland) and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The presented model accomplished competitive gland detection outcomes having F1-score (GlaS(Test A(0.957), Test B(0.926)), CRAG(0.935), LC-25000(0.922), HosC(0.963)); and gland segmentation results having Object-Dice Index (GlaS(Test A(0.961), Test B(0.933)), CRAG(0.961), LC-25000(0.940), HosC(0.929)), and Object-Hausdorff Distance (GlaS(Test A(21.77) and Test B(69.74)), CRAG(87.63), LC-25000(95.85), HosC(83.29)). In addition, a validation score (GlaS(Test A(0.945), Test B(0.937)), CRAG(0.934), LC-25000(0.911), HosC(0.928)) supplied by proficient pathologists is included for the final segmentation results to corroborate the method's applicability and appropriateness for assistance in clinical-level applications.
CONCLUSION The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
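The atrous (dilated) convolutions named in the module titles enlarge the receptive field by spacing kernel taps, without adding parameters; a minimal numpy sketch of the mechanism (toy shapes, not the model's layers):

```python
# Dilated (atrous) 2D convolution: the kernel samples the input with a
# stride `rate` between taps, so a 3x3 kernel at rate 2 covers a 5x5 field.
import numpy as np

def atrous_conv2d(image, kernel, rate):
    """Valid-mode 2D cross-correlation with dilation `rate`."""
    kh, kw = kernel.shape
    # Effective kernel size grows to k + (k - 1) * (rate - 1).
    eh, ew = kh + (kh - 1) * (rate - 1), kw + (kw - 1) * (rate - 1)
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + eh:rate, j:j + ew:rate]  # dilated sampling
            out[i, j] = (patch * kernel).sum()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
dense = atrous_conv2d(img, k, rate=1)    # ordinary 3x3 conv, 4x4 output
dilated = atrous_conv2d(img, k, rate=2)  # same 9 taps, 5x5 receptive field
```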
Collapse
Affiliation(s)
- Manju Dabass
- EECE Deptt, The NorthCap University, Gurugram, India.
| | - Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
| |
Collapse
|
21
|
Dhivya S, Mohanavalli S, Kavitha S. Automated carcinoma classification using efficient nuclei-based patch selection and deep learning techniques. J Intell Fuzzy Syst 2023. [DOI: 10.3233/jifs-222136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/04/2023]
Abstract
Although breast cancer is considered a fatal disease among women, it can be successfully treated if diagnosed early. Digitized histopathology slides are the gold standard for tumor diagnosis. However, manual diagnosis remains tedious due to the images' structural complexity. With the advent of computer-aided diagnosis, this time- and computation-intensive manual procedure can be supported by an automated classification system. Feature extraction and classification are challenging because these images involve complex structures and overlapping nuclei. A novel nuclei-based patch extraction method is proposed for the extraction of non-overlapping nuclei patches from the breast tumor dataset. An ensemble of pre-trained models is used to extract the discriminating features from the identified and augmented non-overlapping nuclei patches. The discriminative features are further fused using a p-norm pooling technique and are classified using a LightGBM classifier with 10-fold cross-validation. The obtained results showed an increase in the overall performance in terms of accuracy, sensitivity, specificity, and precision. The proposed framework yielded an accuracy of 98.3% for binary classification and 95.1% for multi-class classification on the ICIAR 2018 dataset.
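The p-norm pooling step used to fuse patch-level features can be sketched as a generalized mean; the values below are illustrative, and p = 1 recovers plain average pooling while large p approaches max pooling:

```python
# p-norm (generalized mean) pooling of per-patch feature vectors into one
# slide-level descriptor; `patch_feats` is a synthetic (n_patches, n_features)
# matrix, not features from the actual ensemble.
import numpy as np

def p_norm_pool(features, p=3.0):
    """Pool an (n_patches, n_features) matrix along the patch axis."""
    return (np.mean(np.abs(features) ** p, axis=0)) ** (1.0 / p)

patch_feats = np.array([[1.0, 0.0],
                        [2.0, 4.0],
                        [3.0, 0.0]])
pooled = p_norm_pool(patch_feats, p=1.0)  # p=1 is plain average pooling
```

The pooled vector would then feed the downstream classifier (LightGBM in the paper).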
Collapse
Affiliation(s)
- S. Dhivya
- Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
| | - S. Mohanavalli
- Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
| | - S. Kavitha
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
| |
Collapse
|
22
|
Impact of Stain Normalization on Pathologist Assessment of Prostate Cancer: A Comparative Study. Cancers (Basel) 2023; 15:cancers15051503. [PMID: 36900293 PMCID: PMC10000688 DOI: 10.3390/cancers15051503] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 02/17/2023] [Accepted: 02/23/2023] [Indexed: 03/04/2023] Open
Abstract
In clinical routine, the quality of whole-slide images plays a key role in the pathologist's diagnosis, and suboptimal staining may be a limiting factor. The stain normalization process helps to solve this problem through the standardization of color appearance of a source image with respect to a target image with optimal chromatic features. The analysis is focused on the evaluation of the following parameters assessed by two experts on original and normalized slides: (i) perceived color quality, (ii) diagnosis for the patient, (iii) diagnostic confidence and (iv) time required for diagnosis. Results show a statistically significant increase in color quality in the normalized images for both experts (p < 0.0001). Regarding prostate cancer assessment, the average times for diagnosis are significantly lower for normalized images than original ones (first expert: 69.9 s vs. 77.9 s with p < 0.0001; second expert: 37.4 s vs. 52.7 s with p < 0.0001), and at the same time, a statistically significant increase in diagnostic confidence is proven. The improvement of poor-quality images and greater clarity of diagnostically important details in normalized slides demonstrate the potential of stain normalization in the routine practice of prostate cancer assessment.
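Stain normalization of the kind evaluated here standardizes a source image's color appearance against a target with optimal chromatic features. The abstract does not name the tool used, so the sketch below uses simple Reinhard-style mean/variance matching per channel on synthetic images (the original Reinhard method operates in LAB space rather than RGB):

```python
# Reinhard-style normalization sketch: shift and scale each channel of the
# source image to match the target's per-channel mean and std.
import numpy as np

def normalize_stain(source, target):
    """Match each channel of `source` to the target's mean and std."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[2]):
        s_mu, s_sd = source[..., c].mean(), source[..., c].std()
        t_mu, t_sd = target[..., c].mean(), target[..., c].std()
        out[..., c] = (source[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return out

rng = np.random.default_rng(2)
src = rng.uniform(0, 255, size=(32, 32, 3))    # suboptimally stained slide
tgt = rng.uniform(100, 200, size=(32, 32, 3))  # slide with target staining
norm = normalize_stain(src, tgt)
```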
Collapse
|
23
|
Vasiljević J, Nisar Z, Feuerhake F, Wemmert C, Lampert T. CycleGAN for virtual stain transfer: Is seeing really believing? Artif Intell Med 2022; 133:102420. [PMID: 36328671 DOI: 10.1016/j.artmed.2022.102420] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 03/16/2022] [Accepted: 10/02/2022] [Indexed: 01/18/2023]
Abstract
Digital pathology is an area prone to high variation due to multiple factors that can strongly affect the diagnostic quality and visual appearance of whole-slide images (WSIs). State-of-the-art methods tend to address such variation through style-transfer inspired approaches. Usually, these solutions directly apply successful approaches from the literature, potentially with some task-related modifications. The majority of the obtained results are visually convincing; however, this paper shows that this is not a guarantee that such images can be directly used for either medical diagnosis or reducing domain shift. This article shows that a slight modification in a stain transfer architecture, such as the choice of normalisation layer, while producing a variety of visually appealing results, surprisingly has a large effect on the ability of a stain transfer model to reduce domain shift. By extensive qualitative and quantitative evaluations, we confirm that translations resulting from different stain transfer architectures are distinct from each other and from the real samples. Therefore, conclusions made by visual inspection or pretrained-model evaluation might be misleading.
Collapse
Affiliation(s)
- Jelica Vasiljević
- ICube, University of Strasbourg, CNRS (UMR 7357), France; University of Belgrade, Belgrade, Serbia; Faculty of Science, University of Kragujevac, Kragujevac, Serbia.
| | - Zeeshan Nisar
- ICube, University of Strasbourg, CNRS (UMR 7357), France
| | - Friedrich Feuerhake
- Institute of Pathology, Hannover Medical School, Germany; University Clinic, Freiburg, Germany
| | - Cédric Wemmert
- ICube, University of Strasbourg, CNRS (UMR 7357), France
| | - Thomas Lampert
- ICube, University of Strasbourg, CNRS (UMR 7357), France
| |
Collapse
|
24
|
Dabass M, Vashisth S, Vig R. MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland Segmentation in Colon Histopathological Images. Comput Biol Med 2022; 150:106095. [PMID: 36179516 DOI: 10.1016/j.compbiomed.2022.106095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 08/31/2022] [Accepted: 09/10/2022] [Indexed: 11/17/2022]
Abstract
A clinically comparable multi-tasking computerized deep U-Net-based model is demonstrated in this paper. It is intended to offer clinical gland morphometric information and cancer-grade classification as referential opinions for pathologists, in order to reduce human error. It embraces an enhanced feature-learning capability that aids the extraction of potent multi-scale features, recovers the semantic gap during feature concatenation, and intercepts resolution-degradation and vanishing-gradient problems while keeping computation moderate. It integrates three novel structural components into the traditional U-Net architecture, namely Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning through conventional, atrous, residual, depth-wise, and point-wise convolutions, further incorporating target-specific attention learning and an enlarged effective receptive field. Also, pre-processing techniques of patch-sampling, augmentation (color and morphological), stain-normalization, etc. are employed to improve its generalizability. To build network invariance towards digital variability, exhaustive experiments are conducted using three public datasets (the Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K) datasets), and its robustness is then verified using an in-house private Hospital Colon (HosC) dataset. For the cancer classification, the proposed model achieved results of Accuracy (CRAG(95%), GlaS(97.5%), LC-25K(99.97%), HosC(99.45%)), Precision (CRAG(0.9678), GlaS(0.9768), LC-25K(1), HosC(1)), F1-score (CRAG(0.968), GlaS(0.977), LC-25K(0.9997), HosC(0.9965)), and Recall (CRAG(0.9677), GlaS(0.9767), LC-25K(0.9994), HosC(0.9931)).
For the gland detection and segmentation, the proposed model achieved competitive results of F1-score (CRAG(0.924), GlaS(Test A(0.949), Test B(0.918)), LC-25K(0.916), HosC(0.959)); Object-Dice Index (CRAG(0.959), GlaS(Test A(0.956), Test B(0.909)), LC-25K(0.929), HosC(0.922)), and Object-Hausdorff Distance (CRAG(90.47), GlaS(Test A(23.17), Test B(71.53)), LC-25K(96.28), HosC(85.45)). In addition, activation mappings probing the interpretability of the classification decision-making process are reported, using Local Interpretable Model-Agnostic Explanations, occlusion sensitivity, and gradient-weighted class activation mappings. This provides further evidence that the model learns, without any need for annotations, patterns comparable to those considered relevant by pathologists. These activation mapping visualizations were evaluated by proficient pathologists, who assigned them a class-path validation score of (CRAG(9.31), GlaS(9.25), LC-25K(9.05), and HosC(9.85)). Furthermore, a seg-path validation score of (GlaS(Test A(9.40), Test B(9.25)), CRAG(9.27), LC-25K(9.01), HosC(9.19)) given by multiple pathologists is included for the final segmentation outcomes to substantiate the clinical relevance and suitability for facilitation at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during the morphology assessment of histopathology images. It will reduce unintentional human error in cancer diagnosis and consequently enhance patient survival rates.
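The reported Object-Dice scores build on the pixel-level Dice overlap; a minimal sketch is shown below. Note that the object-level variant additionally pairs each predicted gland with its best-overlapping ground-truth gland and size-weights the per-object scores (per the GlaS challenge definition), which is omitted here:

```python
# Pixel-level Dice overlap between a predicted and a ground-truth mask.
import numpy as np

def dice(pred, truth):
    """Dice coefficient of two boolean masks (1.0 if both are empty)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16-px "gland"
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 3:7] = True   # shifted prediction
score = dice(pred, truth)  # 3x3 = 9 px overlap -> 2*9/(16+16) = 0.5625
```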
Collapse
Affiliation(s)
- Manju Dabass
- EECE Deptt, The NorthCap University, Gurugram, 122017, India.
| | - Sharda Vashisth
- EECE Deptt, The NorthCap University, Gurugram, 122017, India
| | - Rekha Vig
- EECE Deptt, The NorthCap University, Gurugram, 122017, India
| |
Collapse
|
25
|
Michielli N, Caputo A, Scotto M, Mogetta A, Pennisi OAM, Molinari F, Balmativola D, Bosco M, Gambella A, Metovic J, Tota D, Carpenito L, Gasparri P, Salvi M. Stain normalization in digital pathology: Clinical multi-center evaluation of image quality. J Pathol Inform 2022; 13:100145. [PMID: 36268060 PMCID: PMC9577129 DOI: 10.1016/j.jpi.2022.100145] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 09/14/2022] [Accepted: 09/22/2022] [Indexed: 11/20/2022] Open
Abstract
In digital pathology, the final appearance of digitized images is affected by several factors, resulting in stain color and intensity variation. Stain normalization is an innovative solution to overcome stain variability. However, color normalization tools have been validated only from a quantitative perspective, through the computation of similarity metrics between the original and normalized images. To the best of our knowledge, no prior work has investigated the impact of normalization on the pathologist's evaluation. The objective of this paper is to propose a multi-tissue (i.e., breast, colon, liver, lung, and prostate) and multi-center qualitative analysis of a stain normalization tool with the involvement of pathologists with different years of experience. Two qualitative studies were carried out for this purpose: (i) a first study focused on the analysis of the perceived image quality and absence of significant image artifacts after the normalization process; (ii) a second study focused on the clinical score of the normalized image with respect to the original one. The results of the first study prove the high quality of the normalized image with minimal artifact generation, while the second study demonstrates the superiority of the normalized image with respect to the original one in clinical practice. The normalization process can help both reduce variability due to tissue-staining procedures and facilitate the pathologist in the histological examination. The experimental results obtained in this work are encouraging and can justify the use of a stain normalization tool in clinical routine.
Collapse
Affiliation(s)
- Nicola Michielli
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Alessandro Caputo
- Department of Medicine and Surgery, University Hospital of Salerno, Salerno, Italy
| | - Manuela Scotto
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Alessandro Mogetta
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Orazio Antonino Maria Pennisi
- Technology Transfer and Industrial Liaison Department, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Filippo Molinari
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Davide Balmativola
- Pathology Unit, Humanitas Gradenigo Hospital, Corso Regina Margherita 8, 10153 Turin, Italy
| | - Martino Bosco
- Department of Pathology, Michele and Pietro Ferrero Hospital, 12060 Verduno, Italy
| | - Alessandro Gambella
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Jasna Metovic
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Daniele Tota
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Laura Carpenito
- Department of Pathology, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- University of Milan, Milan, Italy
| | - Paolo Gasparri
- UOC di Anatomia Patologica, ASP Catania P.O. “Gravina”, Caltagirone, Italy
| | - Massimo Salvi
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| |
Collapse
|
26
|
A convolution neural network with multi-level convolutional and attention learning for classification of cancer grades and tissue structures in colon histopathological images. Comput Biol Med 2022; 147:105680. [DOI: 10.1016/j.compbiomed.2022.105680] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 05/06/2022] [Accepted: 05/30/2022] [Indexed: 12/15/2022]
|
27
|
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182 DOI: 10.1016/j.compbiomed.2022.105636] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 04/04/2022] [Accepted: 04/28/2022] [Indexed: 12/24/2022]
Abstract
Machine vision is being employed in defect detection, size measurement, pattern recognition, image fusion, target tracking and 3D reconstruction. Traditional cancer detection is dominated by manual examination, which is time- and labor-intensive and relies heavily on the pathologists' skill and work experience. Such manual approaches also make it difficult to pass on domain knowledge and are ill-suited to the rapid development of medical care in the future. Machine vision, by contrast, can iteratively learn and update the domain knowledge of cancer-cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, as well as the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells, and compare the benefits and drawbacks of different algorithms. Second, progress in feature-extraction methods tailored to the characteristics of histopathological cancer-cell images, mainly shape, color, and texture features, is reviewed. Third, for the classification of histopathological cancer-cell images, the benefits and drawbacks of traditional machine-vision approaches and deep-learning methods are compared and analyzed. Finally, this body of research is discussed and expected development trends are forecast as a guide for future work.
Collapse
Affiliation(s)
- Wenbin He
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Ting Liu
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Yongjie Han
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Wuyi Ming
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China.
| | - Jinguang Du
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Yinxia Liu
- Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
| | - Yuan Yang
- Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China.
| | - Leijie Wang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
| | - Zhiwen Jiang
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Yongqiang Wang
- Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
| | - Jie Yuan
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
| | - Chen Cao
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
| |
Collapse
|
28
|
Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022; 12:diagnostics12061493. [PMID: 35741303 PMCID: PMC9221905 DOI: 10.3390/diagnostics12061493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 06/11/2022] [Accepted: 06/14/2022] [Indexed: 11/16/2022] Open
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict a label with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, which consists of mixed-patches and their predefined ground-truth labels, for every single mini-batch. Mixed-patches are generated using a small set of clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results obtained using a large histopathological image dataset show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model showed 97.06% accuracy, an increase of 1.6% to 12.18%, while achieving 0.76% expected calibration error, a decrease of 0.6% to 6.3%, over the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed-image methods for medical image analysis in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
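MixPatch's mixed-patch construction and proportion-based soft labels, as described in the abstract, can be sketched as follows; the patch contents, tile counts, and sizes are synthetic illustrations:

```python
# Compose a mixed-patch from four clean patches with known labels and
# define its ground truth as the class-area proportions (a soft label).
import numpy as np

rng = np.random.default_rng(3)
n_classes = 3
patch = lambda: rng.uniform(size=(16, 16, 3))  # stand-in for a clean patch

# A 2x2 mixed-patch built from four clean patches and their class labels.
tiles = [(patch(), 0), (patch(), 0), (patch(), 1), (patch(), 2)]
top = np.hstack([tiles[0][0], tiles[1][0]])
bottom = np.hstack([tiles[2][0], tiles[3][0]])
mixed = np.vstack([top, bottom])               # 32x32x3 mixed-patch

# Proportion-based soft label: fraction of area each class occupies.
soft_label = np.bincount([c for _, c in tiles], minlength=n_classes) / len(tiles)
```

Training on `(mixed, soft_label)` pairs alongside clean patches is what discourages the overconfident one-hot predictions the paper targets.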
Affiliation(s)
- Youngjin Park
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Mujin Kim
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Murtaza Ashraf
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Young Sin Ko
- Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Correspondence:
|
29
|
A Method for Unsupervised Semi-Quantification of Immunohistochemical Staining with Beta Divergences. Entropy 2022; 24:e24040546. [PMID: 35455209 PMCID: PMC9029173 DOI: 10.3390/e24040546] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Revised: 04/05/2022] [Accepted: 04/06/2022] [Indexed: 12/10/2022]
Abstract
In many research laboratories, it is essential to determine the relative expression levels of proteins of interest in tissue samples. Semi-quantitative scoring of a set of images consists of establishing a scale of scores, ranging from zero or one to a maximum set by the researcher, and assigning each image a score that represents some predefined characteristic of the IHC staining, such as its intensity. However, manual scoring depends on the judgment of an observer and therefore exposes the assessment to a certain level of bias. In this work, we present a fully automatic and unsupervised method for comparative biomarker quantification in histopathological brightfield images. The method relies on a color separation step that robustly discriminates between two chromogens, expressed as brown and blue colors, independent of color variation or biomarker expression level. For this purpose, we adopt a two-stage stain separation approach in the optical density space. First, a preliminary separation is performed using a deconvolution method in which the color vectors of the stains are determined by an eigendecomposition of the data. Then, we refine the separation using non-negative matrix factorization with beta divergences, initializing the algorithm with the matrices resulting from the previous step. Next, a feature vector for each image is computed from the intensities of the two chromogens. Finally, the images are scored using a systematically initialized k-means clustering algorithm with beta divergences. The method clearly defines the initial boundaries of the categories, while allowing some flexibility. Experiments on semi-quantitative scoring of images into five categories, compared against the scores of four expert researchers, yielded accuracies ranging between 76.60% and 94.58%. These results show that the proposed automatic scoring system, which is definable and reproducible, produces consistent results.
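The first stage of the optical-density stain separation described above can be sketched in a few lines. This is a simplified illustration under the assumption that the stain color vectors are already known: the eigendecomposition that estimates them and the beta-divergence NMF refinement from the paper are omitted.

```python
import numpy as np

def rgb_to_od(img, bg=255.0):
    """Beer-Lambert optical density: OD = -log10(I / I0)."""
    return -np.log10(np.clip(img.astype(float), 1.0, bg) / bg)

def separate_stains(img, stain_vectors):
    """First-stage separation sketch: least-squares projection of OD
    pixels onto the given stain color vectors (rows of `stain_vectors`,
    normalized to unit length)."""
    od = rgb_to_od(img).reshape(-1, 3)
    m = stain_vectors / np.linalg.norm(stain_vectors, axis=1, keepdims=True)
    conc, *_ = np.linalg.lstsq(m.T, od.T, rcond=None)  # stains x pixels
    return conc.T.reshape(img.shape[:2] + (m.shape[0],))
```

Working in optical density makes stain mixing approximately linear, which is why both the deconvolution and the NMF refinement operate in that space.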
|
30
|
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111 PMCID: PMC9068546 DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting prognosis. However, traditional microscopic examination of tissue sections is time-consuming and labor-intensive, with unavoidable subjective variation. Deep learning (DL) can evaluate and extract the most important information from images with little need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of, and challenges to, the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and, where necessary, full text to determine their relevance. References from the retrieved articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has achieved impressive results in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage, and improving their economic efficiency and clinical adaptability remains the focus of further research.
CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine work, but further studies are needed to realize the digitization and automation of clinical pathology.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
|
31
|
Pérez-Bueno F, Serra JG, Vega M, Mateos J, Molina R, Katsaggelos AK. Bayesian K-SVD for H and E blind color deconvolution. Applications to stain normalization, data augmentation and cancer classification. Comput Med Imaging Graph 2022; 97:102048. [DOI: 10.1016/j.compmedimag.2022.102048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Revised: 11/04/2021] [Accepted: 02/05/2022] [Indexed: 12/17/2022]
|
32
|
Moghadam AZ, Azarnoush H, Seyyedsalehi SA, Havaei M. Stain transfer using Generative Adversarial Networks and disentangled features. Comput Biol Med 2022; 142:105219. [PMID: 35026572 DOI: 10.1016/j.compbiomed.2022.105219] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Revised: 12/22/2021] [Accepted: 01/03/2022] [Indexed: 12/13/2022]
Abstract
With the digitization of histopathology, machine learning algorithms have been developed to help pathologists. Color variation in histopathology images degrades the performance of these algorithms. Many models have been proposed to remove the impact of color variation by transferring histopathology images to a single stain style. Major shortcomings include manual feature extraction, bias toward a reference image, restriction to one-to-one style transfer, dependence on style labels for both source and target domains, and information loss. We propose two models that address these shortcomings. Our main novelty is combining Generative Adversarial Networks (GANs) with feature disentanglement. The models extract color-related and structural features with neural networks, so features are not hand-crafted. Disentangling features lets our models perform many-to-one stain transformations while requiring only target-style labels, and exploiting GANs removes the need for a reference image. Our first model uses one network per stain style transformation, while the second uses a single network for many-to-many stain style transformations. We compare our models with six state-of-the-art models on the Mitosis-Atypia dataset. Both proposed models achieved good results, but the second outperforms the other models on the Histogram Intersection Score (HIS). Our models were applied to three datasets to test their performance, and their efficacy was also evaluated on a classification task. The second model obtained the best results in all experiments, with HIS of 0.88, 0.85, and 0.75 for the L-, a-, and b-channels on the Mitosis-Atypia dataset, and 90.3% classification accuracy.
Affiliation(s)
- Atefeh Ziaei Moghadam
- Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
- Hamed Azarnoush
- Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.
- Seyyed Ali Seyyedsalehi
- Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
|
33
|
Kang H, Luo D, Feng W, Zeng S, Quan T, Hu J, Liu X. StainNet: A Fast and Robust Stain Normalization Network. Front Med (Lausanne) 2021; 8:746307. [PMID: 34805215 PMCID: PMC8602577 DOI: 10.3389/fmed.2021.746307] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 10/04/2021] [Indexed: 01/31/2023] Open
Abstract
Stain normalization, which transfers the color distribution of a target image to a source image, is widely used in biomedical image analysis. Conventional stain normalization is usually achieved through a pixel-by-pixel color mapping model that depends on a single reference image, making it hard to achieve accurate style transformation between image datasets. In principle, this difficulty can be solved by deep learning-based methods; however, their complicated structure results in low computational efficiency and artifacts in the style transformation, which restricts practical application. Here, we use distillation learning to reduce the complexity of deep learning methods and propose a fast and robust network, StainNet, to learn the color mapping between source and target images. StainNet learns the color mapping relationship from a whole dataset and adjusts color values in a pixel-to-pixel manner. This pixel-to-pixel design restricts the network size and avoids artifacts in the style transformation. Results on cytopathology and histopathology datasets show that StainNet achieves performance comparable to deep learning-based methods, while being more than 40 times faster than StainGAN and able to normalize a 100,000 × 100,000 whole slide image in 40 s.
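The "pixel-to-pixel color mapping" idea can be sketched as a tiny per-pixel MLP, which is equivalent to stacked 1×1 convolutions. This is an illustrative stand-in, not StainNet itself: layer sizes are arbitrary, the weights here are random, and the real network is trained by distilling a GAN-based normalizer.

```python
import numpy as np

class PixelColorMapper:
    """Per-pixel color mapping in the spirit of StainNet: a small MLP
    applied independently to each pixel's RGB value (equivalent to
    stacked 1x1 convolutions). Sizes/weights are placeholders."""
    def __init__(self, hidden=8, seed=0):
        r = np.random.default_rng(seed)
        self.w1 = r.normal(scale=0.1, size=(3, hidden)); self.b1 = np.zeros(hidden)
        self.w2 = r.normal(scale=0.1, size=(hidden, 3)); self.b2 = np.zeros(3)
    def __call__(self, img):
        x = img.reshape(-1, 3) / 255.0
        h = np.maximum(x @ self.w1 + self.b1, 0.0)   # ReLU
        y = np.clip(h @ self.w2 + self.b2, 0.0, 1.0)
        return (y * 255.0).reshape(img.shape)
```

Because the mapping depends only on a pixel's color and not its location, identical input colors always map to identical output colors; this is what keeps the network small and structurally artifact-free.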
Affiliation(s)
- Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Die Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Weihua Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Junbo Hu
- Department of Pathology, Hubei Maternal and Child Health Hospital, Wuhan, China
- Xiuli Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
|
34
|
Pérez-Bueno F, Vega M, Sales MA, Aneiros-Fernández J, Naranjo V, Molina R, Katsaggelos AK. Blind color deconvolution, normalization, and classification of histological images using general super Gaussian priors and Bayesian inference. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106453. [PMID: 34649072 DOI: 10.1016/j.cmpb.2021.106453] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Accepted: 10/01/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Color variations in digital histopathology severely impact the performance of computer-aided diagnosis systems. They are caused by differences in the staining process and acquisition system, among other factors. Blind color deconvolution techniques separate multi-stained images into single-stain bands which, once normalized, can be used to eliminate these negative color variations and improve the performance of machine learning tasks. METHODS In this work, we decompose the observed RGB image into its hematoxylin and eosin components. We apply Bayesian modeling and inference based on Super Gaussian sparse priors for each stain, together with a prior enforcing closeness to a given reference color-vector matrix. The hematoxylin and eosin components are then used for normalization and classification of histological images. The proposed framework is tested on stain separation, image normalization, and cancer classification problems, with results measured by peak signal-to-noise ratio (PSNR), normalized median intensity, and area under the ROC curve on five different databases. RESULTS The results show the superiority of our approach over current state-of-the-art blind color deconvolution techniques. In particular, fidelity to the tissue improves by 1.27 dB in mean PSNR. The normalized median intensity indicates good normalization quality on the tested datasets. Finally, in cancer classification experiments, the area under the ROC curve improves from 0.9491 to 0.9656 on Camelyon-16 and from 0.9279 to 0.9541 on Camelyon-17 when the processed images are used instead of the originals. These figures of merit are also better than those obtained by the competing methods. CONCLUSIONS The proposed framework for blind color deconvolution, normalization, and classification guarantees fidelity to the tissue structure and can be used both for normalization and classification. In addition, color deconvolution enables classification in the optical density space, which improves classification performance.
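The normalization step that follows deconvolution can be sketched as "deconvolve with the source stain colors, recompose with reference stain colors". This is a minimal sketch under the assumption that the source color-vector matrix is already known; the paper's Bayesian estimation of that matrix is not reproduced here.

```python
import numpy as np

def normalize_to_reference(img, src_stains, ref_stains, bg=255.0):
    """Deconvolution-based stain normalization sketch: estimate stain
    concentrations with a known source color-vector matrix (rows =
    stains), then recompose the image with reference stain colors."""
    od = -np.log10(np.clip(img.astype(float), 1.0, bg) / bg).reshape(-1, 3)
    s = src_stains / np.linalg.norm(src_stains, axis=1, keepdims=True)
    r = ref_stains / np.linalg.norm(ref_stains, axis=1, keepdims=True)
    conc, *_ = np.linalg.lstsq(s.T, od.T, rcond=None)   # stains x pixels
    od_new = (r.T @ np.clip(conc, 0.0, None)).T         # pixels x 3
    return np.clip(bg * 10.0 ** (-od_new), 0.0, 255.0).reshape(img.shape)
```

When the source and reference matrices coincide, the image should pass through essentially unchanged, which makes a convenient sanity check.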
Affiliation(s)
- Fernando Pérez-Bueno
- Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain.
- Miguel Vega
- Dpto. de Lenguajes y Sistemas Informáticos, Universidad de Granada, Spain.
- María A Sales
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Valencia, Spain.
- José Aneiros-Fernández
- Intercenter Unit of Pathological Anatomy, San Cecilio University Hospital, Granada, Spain.
- Valery Naranjo
- Dpto. de Comunicaciones, Universidad Politécnica de Valencia, Spain.
- Rafael Molina
- Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain.
- Aggelos K Katsaggelos
- Dept. of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, USA.
|
35
|
Boschman J, Farahani H, Darbandsari A, Ahmadvand P, Van Spankeren A, Farnell D, Levine AB, Naso JR, Churg A, Jones SJ, Yip S, Köbel M, Huntsman DG, Gilks CB, Bashashati A. The utility of color normalization for AI-based diagnosis of hematoxylin and eosin-stained pathology images. J Pathol 2021; 256:15-24. [PMID: 34543435 DOI: 10.1002/path.5797] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 08/11/2021] [Accepted: 09/16/2021] [Indexed: 12/17/2022]
Abstract
The color variation of hematoxylin and eosin (H&E)-stained tissues has presented a challenge for applications of artificial intelligence (AI) in digital pathology. Many color normalization algorithms have been developed in recent years in order to reduce the color variation between H&E images. However, previous efforts in benchmarking these algorithms have produced conflicting results and none have sufficiently assessed the efficacy of the various color normalization methods for improving diagnostic performance of AI systems. In this study, we systematically investigated eight color normalization algorithms for AI-based classification of H&E-stained histopathology slides, in the context of using images both from one center and from multiple centers. Our results show that color normalization does not consistently improve classification performance when both training and testing data are from a single center. However, using four multi-center datasets of two cancer types (ovarian and pleural) and objective functions, we show that color normalization can significantly improve the classification accuracy of images from external datasets (ovarian cancer: 0.25 AUC increase, p = 1.6e-05; pleural cancer: 0.21 AUC increase, p = 1.4e-10). Furthermore, we introduce a novel augmentation strategy by mixing color-normalized images using three easily accessible algorithms that consistently improves the diagnosis of test images from external centers, even when the individual normalization methods had varied results. We anticipate our study to be a starting point for reliable use of color normalization to improve AI-based, digital pathology-empowered diagnosis of cancers sourced from multiple centers. © 2021 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
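One simple reading of the mixing augmentation described above is: normalize each training image with a randomly chosen algorithm, so the classifier sees every normalizer's color distribution. The sketch below is an assumption about the mechanism, not the authors' implementation; the `normalizers` callables stand in for real methods (e.g. Reinhard, Macenko, Vahadane).

```python
import numpy as np

def mixed_normalization_batch(images, normalizers, seed=None):
    """Augmentation sketch: apply a randomly selected color-normalization
    callable to each image, returning the augmented batch and the picks."""
    rng = np.random.default_rng(seed)
    picks = rng.integers(len(normalizers), size=len(images))
    return [normalizers[k](img) for img, k in zip(images, picks)], picks
```

Exposing the model to several normalized color distributions during training is what hedges against any single normalizer performing poorly on a given external center.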
Affiliation(s)
- Jeffrey Boschman
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Pouya Ahmadvand
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Ashley Van Spankeren
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- David Farnell
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Adrian B Levine
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Julia R Naso
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Andrew Churg
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Steven JM Jones
- British Columbia Cancer Research Center, Vancouver, BC, Canada
- Stephen Yip
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Martin Köbel
- Department of Pathology and Laboratory Medicine, University of Calgary, Calgary, AB, Canada
- David G Huntsman
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; British Columbia Cancer Research Center, Vancouver, BC, Canada
- C Blake Gilks
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada; Vancouver General Hospital, Vancouver, BC, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
|
36
|
Gomes J, Kong J, Kurc T, Melo ACMA, Ferreira R, Saltz JH, Teodoro G. Building robust pathology image analyses with uncertainty quantification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106291. [PMID: 34333205 DOI: 10.1016/j.cmpb.2021.106291] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 07/09/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Computerized pathology image analysis is an important tool in research and clinical settings, enabling quantitative tissue characterization and assisting a pathologist's evaluation. The aim of our study is to systematically quantify and minimize uncertainty in the output of computer-based pathology image analyses. METHODS Uncertainty quantification (UQ) and sensitivity analysis (SA) methods, such as Variance-Based Decomposition (VBD) and Morris One-At-a-Time (MOAT), are employed to track and quantify uncertainty in a real-world application with large whole slide imaging datasets: 943 Breast Invasive Carcinoma (BRCA) and 381 Lung Squamous Cell Carcinoma (LUSC) patients. Because these studies are compute intensive, high-performance computing systems and efficient UQ/SA methods were combined to provide efficient execution. UQ/SA highlighted the application parameters that impact the results, as well as the nuclear features that carry most of the uncertainty. Using this information, we built a method for selecting stable features that minimize application output uncertainty. RESULTS The results show that input parameter variations significantly impact all stages (segmentation, feature computation, and survival analysis) of the use-case application. We then identified and classified features according to their robustness to parameter variation; using the proposed feature selection strategy, patient grouping stability in survival analysis improved by 17% and 34% for BRCA and LUSC, respectively. CONCLUSIONS This strategy produced more robust analyses, demonstrating that SA and UQ are important methods that may increase confidence in digital pathology.
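The Morris One-At-a-Time screening named above perturbs each input parameter in turn and records its elementary effect on the output. A minimal sketch, in which `f` stands in for the full segmentation/feature/survival pipeline analyzed in the paper:

```python
import numpy as np

def morris_elementary_effects(f, x0, delta=0.1):
    """Morris OAT sketch: for each parameter i, compute the elementary
    effect (f(x0 + delta*e_i) - f(x0)) / delta at the base point x0."""
    x0 = np.asarray(x0, dtype=float)
    base = f(x0)
    effects = []
    for i in range(x0.size):
        x = x0.copy()
        x[i] += delta
        effects.append((f(x) - base) / delta)
    return np.array(effects)
```

In the full Morris method these effects are averaged over many random base points; parameters with large mean absolute effects are the ones whose variation dominates output uncertainty.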
Affiliation(s)
- Jeremias Gomes
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Jun Kong
- Biomedical Informatics Department, Emory University, Atlanta, USA; Department of Biomedical Engineering, Emory-Georgia Institute of Technology, Atlanta, USA; Department of Mathematics and Statistics, Georgia State University, Atlanta, USA
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, USA; Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, USA
- Alba C M A Melo
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Renato Ferreira
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Joel H Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, USA
- George Teodoro
- Department of Computer Science, University of Brasília, Brasília, Brazil; Biomedical Informatics Department, Stony Brook University, Stony Brook, USA; Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.
|
37
|
Meng X, Li X, Wang X. A Computationally Virtual Histological Staining Method to Ovarian Cancer Tissue by Deep Generative Adversarial Networks. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:4244157. [PMID: 34306174 PMCID: PMC8270697 DOI: 10.1155/2021/4244157] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 06/10/2021] [Indexed: 11/21/2022]
Abstract
Histological analysis of tissue samples is fundamental for diagnosing the risk and severity of ovarian cancer. The commonly used hematoxylin and eosin (H&E) staining method involves complex steps and strict requirements, which seriously constrains research on histological analysis of ovarian cancer. Virtual histological staining with Generative Adversarial Networks (GANs) offers a feasible way around these problems, yet applying deep learning here remains challenging because the amount of available training data is quite limited. Based on the idea of GANs, we propose a weakly supervised learning method to generate autofluorescence images of unstained ovarian tissue sections corresponding to H&E-stained sections of ovarian tissue. This construction provides the supervision for the virtual staining process and improves the quality of the images synthesized in the subsequent virtual staining stage. In the doctors' evaluation of our results, the accuracy of the ovarian cancer unstained fluorescence images generated by our method reached 93%; for image quality, the FID reached 175.969, the IS score reached 1.311, and the MS reached 0.717. Based on an image-to-image translation method and the dataset constructed in the previous step, we then implement a virtual staining method that is accurate down to tissue cells. Staining accuracy assessed by doctors reached 97%, and accuracy under deep learning-based visual evaluation reached 95%.
Affiliation(s)
- Xiangyu Meng
- College of Computer Science and Technology, China University of Petroleum, Qingdao, 266580 Shandong, China
- College of Computer and Information Science, Inner Mongolia Agricultural University, Huhhot, 010018 Inner Mongolia, China
- Xin Li
- Department of Gynecology 2, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei, China
- Xun Wang
- College of Computer Science and Technology, China University of Petroleum, Qingdao, 266580 Shandong, China
- China High Performance Computer Research Center, Institute of Computer Technology, Chinese Academy of Science, Beijing, 100190 Beijing, China
|
38
|
Salvi M, Bosco M, Molinaro L, Gambella A, Papotti M, Acharya UR, Molinari F. A hybrid deep learning approach for gland segmentation in prostate histopathological images. Artif Intell Med 2021; 115:102076. [PMID: 34001325 DOI: 10.1016/j.artmed.2021.102076] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 04/08/2021] [Accepted: 04/10/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In digital pathology, the morphology and architecture of prostate glands are routinely used by pathologists to evaluate the presence of cancer tissue. Manual annotations are operator-dependent, error-prone, and time-consuming, and automated segmentation of prostate glands can be very challenging due to large appearance variation and serious degeneration of these histological structures. METHOD A new image segmentation method, called RINGS (Rapid IdentificatioN of Glandular Structures), is presented to segment prostate glands in histopathological images. We designed a novel gland segmentation strategy using a multi-channel algorithm that exploits and fuses both traditional and deep learning techniques. Specifically, the proposed approach employs a hybrid segmentation strategy based on stroma detection to accurately detect and delineate prostate gland contours. RESULTS Automated results are compared with manual annotations and seven state-of-the-art techniques designed for gland segmentation. Being based on stroma segmentation, no performance degradation is observed when segmenting healthy or pathological structures. Our method delineates the prostate glands of an unseen histopathological image with a Dice score of 90.16% and outperforms all the compared state-of-the-art methods. CONCLUSIONS To the best of our knowledge, the RINGS algorithm is the first fully automated method capable of maintaining high sensitivity even in the presence of severe glandular degeneration. The proposed method will help detect prostate glands accurately, assist pathologists in making accurate diagnosis and treatment decisions, and can be used to support prostate cancer diagnosis in polyclinics and community care centres.
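A hybrid "fusion" step of the kind described above can be illustrated as combining a deep learning gland-probability map with a traditionally detected stroma mask. This is an assumption made for illustration only; the threshold and the logical combination are not the published RINGS pipeline.

```python
import numpy as np

def fuse_gland_masks(dl_prob, stroma_mask, thr=0.5):
    """Illustrative fusion: keep deep-learning gland predictions
    (probability >= thr) only outside the detected stroma."""
    return np.logical_and(dl_prob >= thr, ~stroma_mask.astype(bool))
```

Anchoring the gland boundary on the stroma, rather than on gland appearance alone, is what makes the approach insensitive to glandular degeneration: stroma looks similar whether the enclosed glands are healthy or pathological.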
Affiliation(s)
- Massimo Salvi
- Politecnico di Torino, PoliTo(BIO)Med Lab, Biolab, Department of Electronics and Telecommunications, Corso Duca degli Abruzzi 24, Turin, 10129, Italy.
- Martino Bosco
- San Lazzaro Hospital, Department of Pathology, Via Petrino Belli 26, Alba, 12051, Italy
- Luca Molinaro
- A.O.U. Città della Salute e della Scienza Hospital, Division of Pathology, Corso Bramante 88, Turin, 10126, Italy
- Alessandro Gambella
- A.O.U. Città della Salute e della Scienza Hospital, Division of Pathology, Corso Bramante 88, Turin, 10126, Italy
- Mauro Papotti
- University of Turin, Division of Pathology, Department of Oncology, Via Santena 5, Turin, 10126, Italy
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Clementi, 599491, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan
- Filippo Molinari
- Politecnico di Torino, PoliTo(BIO)Med Lab, Biolab, Department of Electronics and Telecommunications, Corso Duca degli Abruzzi 24, Turin, 10129, Italy
|
39
|
Salvi M, Molinari F, Iussich S, Muscatello LV, Pazzini L, Benali S, Banco B, Abramo F, De Maria R, Aresu L. Histopathological Classification of Canine Cutaneous Round Cell Tumors Using Deep Learning: A Multi-Center Study. Front Vet Sci 2021; 8:640944. [PMID: 33869320 PMCID: PMC8044886 DOI: 10.3389/fvets.2021.640944] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 03/08/2021] [Indexed: 01/12/2023] Open
Abstract
Canine cutaneous round cell tumors (RCTs) represent a routine diagnostic challenge for veterinary pathologists. Computer-aided approaches have been developed to overcome these limitations and to increase the accuracy and consistency of diagnosis; such systems are also highly beneficial in reducing errors when a large number of cases are screened daily. In this study we describe ARCTA (Automated Round Cell Tumors Assessment), a fully automated algorithm for cutaneous RCT classification and mast cell tumor grading in canine histopathological images. ARCTA employs a deep learning strategy and was developed on 416 RCT images and 213 mast cell tumor images. In the test set, our algorithm exhibited excellent performance in both RCT classification (accuracy: 91.66%) and mast cell tumor grading (accuracy: 100%). Misdiagnoses were encountered for histiocytomas in the training set and for melanomas in the test set. For mast cell tumors, a one-grade reduction was observed in the training set, but not in the test set. To the best of our knowledge, the proposed model is the first fully automated algorithm for histological images developed specifically for veterinary medicine. Being very fast (average computational time 2.63 s), this algorithm paves the way for automated and effective evaluation of canine tumors.
Affiliation(s)
- Massimo Salvi
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Filippo Molinari
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Selina Iussich
- Department of Veterinary Sciences, University of Turin, Turin, Italy
- Luisa Vera Muscatello
- Department of Veterinary Medical Sciences, University of Bologna, Bologna, Italy; MyLav-Laboratorio La Vallonea, Milan, Italy
- Francesca Abramo
- Department of Veterinary Sciences, University of Pisa, Pisa, Italy
- Luca Aresu
- Department of Veterinary Sciences, University of Turin, Turin, Italy
40
Hoque MZ, Keskinarkaus A, Nyberg P, Seppänen T. Retinex model based stain normalization technique for whole slide image analysis. Comput Med Imaging Graph 2021; 90:101901. [PMID: 33862354 DOI: 10.1016/j.compmedimag.2021.101901] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 06/30/2020] [Revised: 02/28/2021] [Accepted: 03/06/2021] [Indexed: 10/21/2022]
Abstract
Medical imaging underpins the diagnosis of many conditions studied in clinical medicine and pathology. Variations in color and intensity across stained histological slides hamper the quantitative analysis of histopathological images, and stain normalization that classifies pixels into different stain components on the basis of color alone is challenging. Staining variability further complicates the automated segmentation of tissue areas across different stains and the analysis of whole slide images. We have developed a Retinex model based stain normalization technique for area segmentation of stained tissue images that quantifies the individual components of the histochemical stains so that this variability can be removed. The performance was compared experimentally to reference methods on an organotypic carcinoma model based on myoma tissue; our method consistently showed the smallest standard deviation, skewness, and coefficient of variation in normalized median intensity measurements. It also performed better in terms of the Quaternion Structure Similarity Index Metric (QSSIM), Structural Similarity Index Metric (SSIM), and Pearson Correlation Coefficient (PCC), improving robustness against variability and reproducibility. The proposed method could be used in the development of research and diagnostic tools, with potential improvements in the accuracy and consistency of computer-aided diagnosis in biobank applications.
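The consistency criterion used above (standard deviation and coefficient of variation of normalized median intensity, NMI, across a cohort) can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper: the function names and the convention of normalizing the median by the 95th percentile of tissue-pixel intensities are assumptions.

```python
import numpy as np

def normalized_median_intensity(rgb, tissue_mask):
    """NMI of one image: median mean-channel intensity of the tissue
    pixels, normalized by their 95th percentile (a common way to score
    stain-normalization consistency)."""
    vals = rgb[tissue_mask].mean(axis=1)          # mean over R,G,B per tissue pixel
    return np.median(vals) / np.percentile(vals, 95)

def nmi_consistency(images, masks):
    """Std dev and coefficient of variation of NMI across a cohort;
    lower values indicate more consistent normalization."""
    nmis = np.array([normalized_median_intensity(im, m)
                     for im, m in zip(images, masks)])
    return nmis.std(), nmis.std() / nmis.mean()
```

A method that normalizes a cohort well should drive both returned values toward zero.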
Affiliation(s)
- Md Ziaul Hoque
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Anja Keskinarkaus
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pia Nyberg
- Biobank Borealis of Northern Finland, Oulu University Hospital, Finland; Translational & Cancer Research Unit, Medical Research Center Oulu, Faculty of Medicine, University of Oulu, Finland
- Tapio Seppänen
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
41
Vijh S, Saraswat M, Kumar S. A new complete color normalization method for H&E stained histopathological images. Appl Intell 2021. [DOI: 10.1007/s10489-021-02231-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 10/21/2022]
42
Shin SJ, You SC, Jeon H, Jung JW, An MH, Park RW, Roh J. Style transfer strategy for developing a generalizable deep learning application in digital pathology. Comput Methods Programs Biomed 2021; 198:105815. [PMID: 33160111 DOI: 10.1016/j.cmpb.2020.105815] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 06/28/2020] [Accepted: 10/20/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Despite recent advances in artificial intelligence for medical images, the development of robust deep learning models for identifying malignancy on pathology slides has been limited by substantial inter- and intra-institutional heterogeneity attributable to tissue preparation. The paucity of available data aggravates this limitation for relatively rare cancers. Here, using ovarian cancer pathology images, we explored the effect of image-to-image style transfer approaches on diagnostic performance. METHODS We leveraged a relatively large public image set for 142 patients with ovarian cancer from The Cancer Imaging Archive (TCIA) to fine-tune the well-known deep learning model Inception V3 for identifying malignancy on tissue slides. As an external validation, the performance of the developed classifier was tested on a relatively small institutional pathology image set for 32 patients. To reduce the performance deterioration associated with the inter-institutional heterogeneity of pathology slides, we translated the style of the small local-institution image set into the style of the large TCIA image set using cycle-consistent generative adversarial networks (CycleGAN). RESULTS Without style transfer, the classifier achieved an area under the receiver operating characteristic curve (AUROC) of 0.737 and an area under the precision-recall curve (AUPRC) of 0.710. After style transfer, the AUROC and AUPRC improved to 0.916 and 0.898, respectively. CONCLUSIONS This study demonstrates a successful application of style transfer to generalize a deep learning model to small image sets in digital pathology. Researchers at local institutions can adopt this collaborative approach to make their small image sets compatible with such a deep learning model.
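The gains above are reported with threshold-free metrics. As a reminder of what those two numbers measure, here is a minimal NumPy sketch (a rank-based AUROC and an average-precision AUPRC); this is generic evaluation code, not the authors' pipeline.

```python
import numpy as np

def auroc(labels, scores):
    """Area under ROC via the Mann-Whitney U statistic: the probability
    that a random positive outscores a random negative (ties count half)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """Average precision: precision evaluated at each positive, taken in
    descending score order, averaged over the positives."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    precision = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return (precision * labels).sum() / labels.sum()
```

A perfectly ranked set yields 1.0 for both; a coin-flip classifier yields an AUROC of 0.5.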
Affiliation(s)
- Seo Jeong Shin
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
- Seng Chan You
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea
- Hokyun Jeon
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
- Ji Won Jung
- Department of Pathology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Asan Institute for Life Science, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Min Ho An
- So Ahn Public Health Center, Wando-gun, Jeollanam-do, Republic of Korea
- Rae Woong Park
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea; Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea
- Jin Roh
- Department of Pathology, Ajou University Hospital, Suwon, Republic of Korea
43
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 89] [Impact Index Per Article: 29.7] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
44
Karpinski Score under Digital Investigation: A Fully Automated Segmentation Algorithm to Identify Vascular and Stromal Injury of Donors’ Kidneys. Electronics 2020. [DOI: 10.3390/electronics9101644] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Indexed: 12/21/2022]
Abstract
In kidney transplantation, the evaluation of vascular structures and stromal areas is crucial for determining kidney acceptance, which is currently based on the pathologist’s visual evaluation. In this context, an accurate assessment of vascular and stromal injury is fundamental to assessing the nephron status. In the present paper, the authors present a fully automated algorithm, called RENFAST (Rapid EvaluatioN of Fibrosis And vesselS Thickness), for the segmentation of kidney blood vessels and fibrosis in histopathological images. The proposed method employs a novel deep learning strategy to accurately segment blood vessels, while interstitial fibrosis is assessed using an adaptive stain separation method. The RENFAST algorithm is developed and tested on 350 periodic acid–Schiff (PAS) images for blood vessel segmentation and on 300 Masson’s trichrome (TRIC) stained images for the detection of renal fibrosis. In the test set, the algorithm exhibits excellent segmentation performance for both blood vessels (accuracy: 0.8936) and fibrosis (accuracy: 0.9227) and outperforms all the compared methods. To the best of our knowledge, the RENFAST algorithm is the first fully automated method capable of detecting both blood vessels and fibrosis in digital histological images. Being very fast (average computational time: 2.91 s), this algorithm paves the way for automated, quantitative, and real-time kidney graft assessments.
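The fibrosis branch above relies on stain separation. A standard, non-adaptive baseline for this is Ruifrok–Johnston color deconvolution, sketched below; the fixed stain matrix passed in is an illustrative assumption (RENFAST estimates its stain vectors adaptively per image, which is not reproduced here).

```python
import numpy as np

def stain_separate(rgb, stain_vectors):
    """Ruifrok-Johnston color deconvolution: map RGB values in (0, 1]
    to optical density (Beer-Lambert law), then least-squares unmix
    onto the given stain vectors (one row of `stain_vectors` per stain)."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))                  # optical density per channel
    M = stain_vectors / np.linalg.norm(stain_vectors, axis=1, keepdims=True)
    conc = od.reshape(-1, 3) @ np.linalg.pinv(M)             # per-pixel stain concentrations
    return conc.reshape(rgb.shape[:2] + (M.shape[0],))
```

Each output channel is then a grayscale map of one stain, on which a fibrosis threshold can be applied.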
45
Salvi M, Molinaro L, Metovic J, Patrono D, Romagnoli R, Papotti M, Molinari F. Fully automated quantitative assessment of hepatic steatosis in liver transplants. Comput Biol Med 2020; 123:103836. [PMID: 32658781 DOI: 10.1016/j.compbiomed.2020.103836] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 03/30/2020] [Revised: 05/25/2020] [Accepted: 05/25/2020] [Indexed: 02/07/2023]
Abstract
BACKGROUND The presence of macro- and microvesicular steatosis is one of the major risk factors in liver transplantation. An accurate assessment of the steatosis percentage is crucial for determining liver graft transplantability, which is currently based on pathologists' visual evaluation of liver histology specimens. METHOD The aim of this study was to develop and validate a fully automated algorithm, called HEPASS (HEPatic Adaptive Steatosis Segmentation), for both micro- and macrosteatosis detection in digital liver histological images. The proposed method employs a hybrid deep learning framework, combining the accuracy of an adaptive threshold with the semantic segmentation of a deep convolutional neural network. Starting from all white regions, the HEPASS algorithm detects lipid droplets and classifies them as micro- or macrosteatosis. RESULTS The proposed method was developed and tested on 385 hematoxylin and eosin (H&E) stained images from 77 liver donors. Automated results were compared with manual annotations and nine state-of-the-art techniques designed for steatosis segmentation. In the test set, the algorithm achieved 97.27% accuracy in steatosis quantification (average error 1.07%, maximum average error 5.62%) and outperformed all the compared methods. CONCLUSIONS To the best of our knowledge, this is the first fully automated algorithm for the assessment of both micro- and macrosteatosis in H&E stained liver tissue images. Being very fast (average computational time: 0.72 s), this algorithm paves the way for automated, quantitative and real-time liver graft assessments.
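As a toy illustration of the white-region starting point described above (and nothing more: HEPASS's CNN-based rejection of other white structures and its micro/macro split are not reproduced), a crude steatosis area fraction could be estimated as follows; the fixed threshold is a hypothetical value, unlike the adaptive one used by the paper.

```python
import numpy as np

def steatosis_area_fraction(gray, tissue_mask, white_thresh=0.85):
    """Flag near-white pixels inside the tissue as candidate lipid
    droplets and return their area fraction of the tissue. `gray` is a
    grayscale image with values in [0, 1]; `tissue_mask` is boolean."""
    droplets = (gray >= white_thresh) & tissue_mask
    return droplets.sum() / tissue_mask.sum()
```

On real slides this overcounts, since vessels and sinusoids are also near-white; that is exactly the gap the semantic segmentation network in the paper is meant to close.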
Affiliation(s)
- Massimo Salvi
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Luca Molinaro
- Division of Pathology, AOU Città Della Salute e Della Scienza di Torino, Turin, Italy
- Jasna Metovic
- Division of Pathology, Department of Oncology, University of Turin, Turin, Italy
- Damiano Patrono
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Renato Romagnoli
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Mauro Papotti
- Division of Pathology, Department of Oncology, University of Turin, Turin, Italy
- Filippo Molinari
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy