1
Hoyer DP, Ting S, Rogacka N, Koitka S, Hosch R, Flaschel N, Haubold J, Malamutmann E, Stüben BO, Treckmann J, Nensa F, Baldini G. AI-based digital histopathology for perihilar cholangiocarcinoma: A step, not a jump. J Pathol Inform 2024; 15:100345. [PMID: 38075015] [PMCID: PMC10698537] [DOI: 10.1016/j.jpi.2023.100345]
Abstract
INTRODUCTION Perihilar cholangiocarcinoma (PHCC) is a rare malignancy for which survival prediction remains inaccurate. Artificial intelligence (AI) and advances in digital pathology have shown promise in predicting cancer outcomes. We aimed to improve prognosis prediction for PHCC by combining AI-based histopathological slide analysis with clinical factors. METHODS We retrospectively analyzed 317 surgically treated PHCC patients (January 2009-December 2018) at the University Hospital of Essen. Clinical data, surgical details, pathology, and outcomes were collected. Convolutional neural networks (CNNs) analyzed whole-slide images. Survival models incorporated clinical and histological features. RESULTS Among 142 eligible patients, independent survival predictors were tumor grade (G), tumor size (T), and intraoperative transfusion requirement. The CNN-based model combining clinical and histopathological features demonstrated proof of concept for prognosis prediction, but was limited by histopathological complexity and feature-extraction challenges. It nevertheless generated heatmaps that assisted pathologists in identifying areas of interest. CONCLUSION AI-based digital pathology showed potential for PHCC prognosis prediction, though refinement is necessary for clinical relevance. Future research should focus on enhancing AI models and exploring novel approaches to improve prognosis prediction for PHCC patients.
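The survival modeling described above can be sketched in a few lines: a Cox proportional hazards fit combining the reported clinical predictors with a pooled CNN slide score. This is an illustrative reconstruction on synthetic data, not the study's actual model; all column names and coefficients are hypothetical.

```python
# Hedged sketch: Cox proportional hazards model combining clinical factors
# with a CNN-derived slide-level score (synthetic data; illustrative only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "tumor_grade": rng.integers(1, 4, n),       # G
    "tumor_size_t": rng.integers(1, 5, n),      # T
    "transfusion": rng.integers(0, 2, n),       # intraoperative transfusion
    "cnn_slide_score": rng.uniform(0, 1, n),    # pooled CNN output per WSI
})
# Synthetic survival times loosely driven by the covariates.
risk = 0.4 * df["tumor_grade"] + 0.3 * df["transfusion"] + df["cnn_slide_score"]
df["months"] = rng.exponential(scale=60 / np.exp(risk - risk.mean()))
df["event"] = rng.integers(0, 2, n)             # simplified censoring indicator

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()                             # hazard ratio per predictor
```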
Affiliation(s)
Dieter P. Hoyer
- University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
Saskia Ting
- University Hospital Essen, Institute for Pathology and Neuropathology, Essen, Germany
- Institute of Pathology Nordhessen, Kassel, Germany
Nina Rogacka
- University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
Sven Koitka
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
René Hosch
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
Nils Flaschel
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
Johannes Haubold
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
Eugen Malamutmann
- University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
Björn-Ole Stüben
- University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
Jürgen Treckmann
- University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
Felix Nensa
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
Giulia Baldini
- University Hospital Essen, Institute of Interventional and Diagnostic Radiology and Neuroradiology, Essen, Germany
- University Hospital Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
2
Neri F, Takajjart SN, Lerner CA, Desprez PY, Schilling B, Campisi J, Gerencser AA. A Fully-Automated Senescence Test (FAST) for the high-throughput quantification of senescence-associated markers. GeroScience 2024; 46:4185-4202. [PMID: 38869711] [PMCID: PMC11336018] [DOI: 10.1007/s11357-024-01167-3]
Abstract
Cellular senescence is a major driver of aging and age-related diseases. Quantification of senescent cells remains challenging due to the lack of senescence-specific markers and generalist, unbiased methodology. Here, we describe the Fully-Automated Senescence Test (FAST), an image-based method for the high-throughput, single-cell assessment of senescence in cultured cells. FAST quantifies three of the most widely adopted senescence-associated markers for each cell imaged: senescence-associated β-galactosidase activity (SA-β-Gal) using X-Gal, proliferation arrest via lack of 5-ethynyl-2'-deoxyuridine (EdU) incorporation, and enlarged morphology via increased nuclear area. The presented workflow entails microplate image acquisition, image processing, data analysis, and graphing. Standardization was achieved by (i) quantifying colorimetric SA-β-Gal via optical density; (ii) implementing staining background controls; and (iii) automating image acquisition, image processing, and data analysis. In addition to the automated threshold-based scoring, a multivariate machine learning approach is provided. We show that FAST accurately quantifies senescence burden and is agnostic to cell type and microscope setup. Moreover, it effectively mitigates false-positive senescence marker staining, a common issue arising from culturing conditions. Using FAST, we compared X-Gal with fluorescent C12FDG live-cell SA-β-Gal staining on the single-cell level. We observed only a modest correlation between the two, indicating that those stains are not trivially interchangeable. Finally, we provide proof of concept that our method is suitable for screening compounds that modify senescence burden. This method will be broadly useful to the aging field by enabling rapid, unbiased, and user-friendly quantification of senescence burden in culture, as well as facilitating large-scale experiments that were previously impractical.
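The threshold-based scoring that FAST automates can be illustrated with a minimal sketch: given per-cell measurements for the three markers, a cell is called senescent when all three criteria are met. The data and thresholds below are placeholders; in FAST they are derived from staining and background controls.

```python
# Hedged sketch of FAST-style threshold scoring on a hypothetical
# single-cell table (one row per segmented nucleus).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cells = pd.DataFrame({
    "sa_bgal_od": rng.uniform(0.0, 0.6, 1000),      # X-Gal optical density
    "edu_positive": rng.uniform(0, 1, 1000) > 0.7,  # EdU incorporation
    "nuclear_area": rng.normal(180, 60, 1000),      # nuclear area, um^2
})

# Placeholder thresholds; FAST derives these from control wells.
OD_MIN, AREA_MIN = 0.25, 220.0

senescent = (
    (cells["sa_bgal_od"] > OD_MIN)         # SA-beta-Gal positive
    & ~cells["edu_positive"]               # proliferation arrest (EdU negative)
    & (cells["nuclear_area"] > AREA_MIN)   # enlarged morphology
)
print(f"Senescence burden: {senescent.mean():.1%}")
```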
Affiliation(s)
Francesco Neri
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
Chad A Lerner
- Buck Institute for Research on Aging, Novato, CA, USA
Pierre-Yves Desprez
- Buck Institute for Research on Aging, Novato, CA, USA
- California Pacific Medical Center, San Francisco, CA, USA
Birgit Schilling
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
Judith Campisi
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
3
Marini N, Marchesin S, Wodzinski M, Caputo A, Podareanu D, Guevara BC, Boytcheva S, Vatrano S, Fraggetta F, Ciompi F, Silvello G, Müller H, Atzori M. Multimodal representations of biomedical knowledge from limited training whole slide images and reports using deep learning. Med Image Anal 2024; 97:103303. [PMID: 39154617] [DOI: 10.1016/j.media.2024.103303]
Abstract
The increasing availability of biomedical data creates valuable resources for developing new deep learning algorithms to support experts, especially in domains where collecting large volumes of annotated data is not trivial. Biomedical data include several modalities containing complementary information, such as medical images and reports: images are often large and encode low-level information, while reports include a summarized high-level description of the findings identified within the data, often concerning only a small part of the image. However, only a few methods can effectively link the visual content of images with the textual content of reports, preventing medical specialists from properly benefiting from the recent opportunities offered by deep learning models. This paper introduces a multimodal architecture that creates a robust biomedical data representation by encoding fine-grained text representations within image embeddings. The architecture aims to tackle data scarcity (combining supervised and self-supervised learning) and to create multimodal biomedical ontologies. The architecture is trained on over 6,000 colon whole-slide images (WSIs), each paired with the corresponding report, collected from two digital pathology workflows. The evaluation of the multimodal architecture involves three tasks: WSI classification (on data from the pathology workflows and from public repositories), multimodal data retrieval, and linking between textual and visual concepts. Notably, the latter two tasks are available by architectural design without further training, showing that the multimodal architecture can be adopted as a backbone for specialized tasks. The multimodal data representation outperforms the unimodal one on the classification of colon WSIs and halves the data needed to reach accurate performance, reducing the computational power required and thus the carbon footprint. The combination of images and reports exploiting self-supervised algorithms makes it possible to mine databases and extract new information without new expert annotations. In particular, the multimodal visual ontology, linking semantic concepts to images, may pave the way to advancements in medicine and biomedical analysis domains, not limited to histopathology.
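The paper's architecture combines supervised and self-supervised objectives in its own specific way, but the core idea of linking image and report embeddings can be illustrated with a generic CLIP-style contrastive objective. This sketch shows one common way such alignment is trained, not the authors' exact method.

```python
# Hedged sketch of a contrastive image-report alignment objective
# (CLIP-style); encoders are out of scope and embeddings are random here.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    img = F.normalize(img_emb, dim=-1)       # (B, D) WSI embeddings
    txt = F.normalize(txt_emb, dim=-1)       # (B, D) report embeddings
    logits = img @ txt.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(img.size(0))      # matched pairs on the diagonal
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```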
Affiliation(s)
Niccolò Marini
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
Stefano Marchesin
- Department of Information Engineering, University of Padua, Padua, Italy
Marek Wodzinski
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland
Alessandro Caputo
- Department of Pathology, Ruggi University Hospital, Salerno, Italy
- Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
Svetla Boytcheva
- Ontotext, Sofia, Bulgaria
- Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria
Simona Vatrano
- Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
Filippo Fraggetta
- Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
Gianmaria Silvello
- Department of Information Engineering, University of Padua, Padua, Italy
Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Medical Faculty, University of Geneva, 1211 Geneva, Switzerland
Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Department of Neurosciences, University of Padua, Padua, Italy
4
Mehrabian H, Brodbeck J, Lyu P, Vaquero E, Aggarwal A, Diehl L. Leveraging immuno-fluorescence data to reduce pathologist annotation requirements in lung tumor segmentation using deep learning. Sci Rep 2024; 14:21643. [PMID: 39284813] [PMCID: PMC11405770] [DOI: 10.1038/s41598-024-69244-3]
Abstract
The main bottleneck in training a robust tumor segmentation algorithm for non-small cell lung cancer (NSCLC) on H&E is generating sufficient ground truth annotations. Various approaches for generating tumor labels to train a tumor segmentation model were explored. A large dataset of low-cost, low-accuracy panCK-based annotations was used to pre-train the model and determine the minimum required size of the expensive but highly accurate pathologist-annotated dataset. PanCK pre-training was compared to foundation models, and various architectures were explored for the model backbone. Study design and sample procurement for training a generalizable model that captures the variation in NSCLC H&E were also addressed. H&E imaging was performed on 112 samples (three centers, two scanner types, different staining and imaging protocols). An Attention U-Net architecture was trained using the large panCK-based annotation dataset (68 samples, total area 10,326 mm²) followed by fine-tuning using a small pathologist-annotated dataset (80 samples, total area 246 mm²). This approach resulted in a mean intersection over union (mIoU) of 82% [77, 87]. PanCK pre-training provided better performance than foundation models and allowed for a 70% reduction in pathologist annotations with no drop in performance. The study design ensured model generalizability over variations in H&E, with performance consistent across centers, scanners, and subtypes.
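The two-stage scheme (pre-train on abundant, noisy panCK-derived masks, then fine-tune on scarce pathologist masks at a lower learning rate) can be sketched as below. The tiny stand-in network and synthetic loaders are placeholders for the paper's Attention U-Net and datasets.

```python
# Hedged sketch: pre-train on noisy panCK labels, fine-tune on
# pathologist labels (stand-in model and synthetic data).
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Sequential(                  # placeholder for Attention U-Net
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 1))

def make_loader(n):                           # synthetic (image, mask) pairs
    x = torch.rand(n, 3, 64, 64)
    y = (torch.rand(n, 1, 64, 64) > 0.5).float()
    return DataLoader(TensorDataset(x, y), batch_size=4)

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss_fn(model(images), masks).backward()
            opt.step()

train(model, make_loader(64), lr=1e-4, epochs=2)  # stage 1: panCK masks
train(model, make_loader(16), lr=1e-5, epochs=2)  # stage 2: pathologist masks,
                                                  # lower LR preserves stage 1
```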
Affiliation(s)
Hatef Mehrabian
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
Jens Brodbeck
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
Peipei Lyu
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
Edith Vaquero
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
Abhishek Aggarwal
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
Lauri Diehl
- Non-Clinical Safety and Pathobiology, Gilead Sciences, Foster City, CA, USA
5
Aden D, Zaheer S, Khan S. Possible benefits, challenges, pitfalls, and future perspective of using ChatGPT in pathology. Rev Esp Patol 2024; 57:198-210. [PMID: 38971620] [DOI: 10.1016/j.patol.2024.04.003]
Abstract
ChatGPT, the widely publicized artificial intelligence (AI) model developed by OpenAI, can offer great benefits to physicians, especially pathologists, by saving time that can be redirected toward more significant work. Generative AI is a special class of AI model that uses patterns and structures learned from existing data to create new data. Utilizing ChatGPT in pathology offers a multitude of benefits, encompassing the summarization of patient records, promising prospects in digital pathology, and valuable contributions to education and research in this field. However, certain roadblocks must be addressed, such as integrating ChatGPT with image analysis, which could revolutionize the field of pathology by increasing diagnostic accuracy and precision. The challenges of using ChatGPT include biases from its training data, the need for ample input data, potential risks related to bias and transparency, and adverse outcomes arising from inaccurate content generation. A further goal is the generation of meaningful insights from textual information to support the efficient processing of different types of image data, such as medical images and pathology slides. Due consideration should be given to ethical and legal issues, including bias.
Affiliation(s)
Durre Aden
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India
Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
Sabina Khan
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India
6
Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024; 161:431-478. [PMID: 39032956] [DOI: 10.1016/bs.acr.2024.05.006]
Abstract
The integration of computer vision into pathology through slide digitalization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. The applications of advanced algorithms and deep learning architectures like CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration between pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
Justin Chang
- Virginia Commonwealth University Health System, Richmond, VA, United States
Bryce Hatfield
- Virginia Commonwealth University Health System, Richmond, VA, United States
7
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities like digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. Across all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
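As a concrete example of one harmonization strategy surveyed here, per-channel histogram matching maps an image from one center onto the color distribution of a reference image from another. The images below are random stand-ins for real slides.

```python
# Hedged sketch of a simple harmonization step: histogram matching of
# an image from center A to a reference image from center B.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
source = rng.random((256, 256, 3))      # stand-in image from center A
reference = rng.random((256, 256, 3))   # stand-in reference from center B

# Match each color channel's intensity distribution to the reference.
harmonized = match_histograms(source, reference, channel_axis=-1)
```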
Affiliation(s)
Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
Alen Shahini
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
Kristen M Meiburger
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
Francesco Marzola
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
Giulia Rotunno
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
- Centre for Health Research, University of Southern Queensland, Australia
Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
8
Kiyuna T, Cosatto E, Hatanaka KC, Yokose T, Tsuta K, Motoi N, Makita K, Shimizu A, Shinohara T, Suzuki A, Takakuwa E, Takakuwa Y, Tsuji T, Tsujiwaki M, Yanai M, Yuzawa S, Ogura M, Hatanaka Y. Evaluating Cellularity Estimation Methods: Comparing AI Counting with Pathologists' Visual Estimates. Diagnostics (Basel) 2024; 14:1115. [PMID: 38893641] [PMCID: PMC11171606] [DOI: 10.3390/diagnostics14111115]
Abstract
The development of next-generation sequencing (NGS) has enabled the discovery of cancer-specific driver gene alterations, making precision medicine possible. However, accurate genetic testing requires a sufficient amount of tumor cells in the specimen. The evaluation of tumor content ratio (TCR) from hematoxylin and eosin (H&E)-stained images has been found to vary between pathologists, making it an important challenge to obtain an accurate TCR. In this study, three pathologists exhaustively labeled all cells in 41 regions from 41 lung cancer cases as tumor, non-tumor, or indistinguishable, thus establishing a "gold standard" TCR. We then compared the accuracy of the TCR estimated by 13 pathologists based on visual assessment with the TCR calculated by an AI model that we developed. It is a compact and fast model that follows a fully convolutional neural network architecture and produces cell detection maps, which can be efficiently post-processed to obtain tumor and non-tumor cell counts from which the TCR is calculated. Its raw cell detection accuracy is 92%, and its classification accuracy is 84%. The results show that the error between the gold standard TCR and the AI calculation was significantly smaller than that between the gold standard TCR and the pathologists' visual assessments (p<0.05). Additionally, the robustness of AI models across institutions is a key issue, and we demonstrate that the variation in AI estimates was smaller than that in the pathologists' averages when evaluated by institution. These findings suggest that the accuracy of tumor cellularity assessment in clinical workflows can be significantly improved by the introduction of robust AI models, leading to more efficient genetic testing and, ultimately, better patient outcomes.
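The final post-processing step follows directly from the abstract: the detection/classification maps are reduced to per-class cell counts, from which the TCR is computed. A minimal sketch with hypothetical labels:

```python
# Hedged sketch of TCR post-processing: per-cell class labels from the
# detection model are reduced to counts (labels here are synthetic;
# indistinguishable cells are assumed excluded upstream).
import numpy as np

cell_labels = np.random.default_rng(0).integers(0, 2, size=5000)  # 1 = tumor

n_tumor = int((cell_labels == 1).sum())
n_non_tumor = int((cell_labels == 0).sum())
tcr = n_tumor / (n_tumor + n_non_tumor)
print(f"Tumor content ratio: {tcr:.1%}")
```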
Affiliation(s)
Tomoharu Kiyuna
- Healthcare Life Science Division, NEC Corporation, Tokyo 108-8556, Japan
Eric Cosatto
- Department of Machine Learning, NEC Laboratories America, Princeton, NJ 08540, USA
Kanako C. Hatanaka
- Center for Development of Advanced Diagnostics (C-DAD), Hokkaido University Hospital, Sapporo 060-8648, Japan
Tomoyuki Yokose
- Department of Pathology, Kanagawa Cancer Center, Yokohama 241-8515, Japan
Koji Tsuta
- Department of Pathology, Kansai Medical University, Osaka 573-1010, Japan
Noriko Motoi
- Department of Pathology, Saitama Cancer Center, Saitama 362-0806, Japan
Keishi Makita
- Department of Pathology, Oji General Hospital, Tomakomai 053-8506, Japan
Ai Shimizu
- Department of Surgical Pathology, Hokkaido University Hospital, Sapporo 060-8648, Japan
Toshiya Shinohara
- Department of Pathology, Teine Keijinkai Hospital, Sapporo 006-0811, Japan
Akira Suzuki
- Department of Pathology, KKR Sapporo Medical Center, Sapporo 062-0931, Japan
Emi Takakuwa
- Department of Surgical Pathology, Hokkaido University Hospital, Sapporo 060-8648, Japan
Yasunari Takakuwa
- Department of Pathology, NTT Medical Center Sapporo, Sapporo 060-0061, Japan
Takahiro Tsuji
- Department of Pathology, Sapporo City General Hospital, Sapporo 060-8604, Japan
Mitsuhiro Tsujiwaki
- Department of Surgical Pathology, Sapporo Medical University Hospital, Sapporo 060-8543, Japan
Mitsuru Yanai
- Department of Pathology, Sapporo Tokushukai Hospital, Sapporo 004-0041, Japan
Sayaka Yuzawa
- Department of Diagnostic Pathology, Asahikawa Medical University Hospital, Asahikawa 078-8510, Japan
Maki Ogura
- Healthcare Life Science Division, NEC Corporation, Tokyo 108-8556, Japan
Yutaka Hatanaka
- Center for Development of Advanced Diagnostics (C-DAD), Hokkaido University Hospital, Sapporo 060-8648, Japan
9
Jin L, Tang Y, Coole JB, Tan MT, Zhao X, Badaoui H, Robinson JT, Williams MD, Vigneswaran N, Gillenwater AM, Richards-Kortum RR, Veeraraghavan A. DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology. Nat Commun 2024; 15:2935. [PMID: 38580633] [PMCID: PMC10997797] [DOI: 10.1038/s41467-024-47065-2]
Abstract
Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultraviolet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings.
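The cited work trains a semi-supervised GAN for the virtual-staining step; a simpler analytic baseline sometimes used for fluorescence-to-H&E false coloring is a Beer-Lambert-style mapping, sketched below with illustrative absorption constants (not the paper's method or parameters).

```python
# Hedged sketch of analytic virtual H&E false coloring from two
# fluorescence channels (Beer-Lambert-style mapping; constants are
# illustrative, and the cited paper instead uses a GAN for this step).
import numpy as np

def virtual_he(nuclear: np.ndarray, eosin: np.ndarray) -> np.ndarray:
    """Map normalized fluorescence channels (H, W) to H&E-like RGB."""
    k_h = np.array([0.86, 1.00, 0.30])   # hematoxylin absorption per RGB band
    k_e = np.array([0.05, 1.00, 0.54])   # eosin absorption per RGB band
    rgb = (np.exp(-nuclear[..., None] * k_h * 2.5)
           * np.exp(-eosin[..., None] * k_e))
    return (rgb * 255).astype(np.uint8)

img = virtual_he(np.random.rand(64, 64), np.random.rand(64, 64))
print(img.shape)  # (64, 64, 3)
```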
Affiliation(s)
Lingbo Jin
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
Yubo Tang
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
Jackson B Coole
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
Melody T Tan
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
Xuan Zhao
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
Hawraa Badaoui
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
Jacob T Robinson
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
Michelle D Williams
- Department of Pathology, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
Nadarajah Vigneswaran
- Department of Diagnostic and Biomedical Sciences, University of Texas Health Science Center at Houston School of Dentistry, 7500 Cambridge St, Houston, TX, USA
Ann M Gillenwater
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
10
Ochi M, Komura D, Onoyama T, Shinbo K, Endo H, Odaka H, Kakiuchi M, Katoh H, Ushiku T, Ishikawa S. Registered multi-device/staining histology image dataset for domain-agnostic machine learning models. Sci Data 2024; 11:330. [PMID: 38570515] [PMCID: PMC10991301] [DOI: 10.1038/s41597-024-03122-5]
Abstract
Variations in the color and texture of histopathology images are caused by differences in staining conditions and imaging devices between hospitals. These biases decrease the robustness of machine learning models exposed to out-of-domain data. To address this issue, we introduce a comprehensive histopathology image dataset named PathoLogy Images of Scanners and Mobile phones (PLISM). The dataset consists of 46 human tissue types stained under 13 hematoxylin and eosin conditions and captured using 13 imaging devices. Precisely aligned image patches from different domains allow for an accurate evaluation of color and texture properties in each domain. Variation across PLISM was assessed and found to be substantial between domains, particularly between whole-slide scanners and smartphones. Furthermore, we assessed the improvement in domain shift using a convolutional neural network pre-trained on PLISM. PLISM is a valuable resource that facilitates the precise evaluation of domain shifts in digital pathology and makes significant contributions towards the development of robust machine learning models that can effectively address the challenges of domain shift in histological image analysis.
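Because PLISM patches are registered across domains, domain shift can be measured directly on pixel-aligned pairs. A sketch using mean Lab-space color difference; the metric choice and synthetic patches here are illustrative, not taken from the paper.

```python
# Hedged sketch: measuring color shift between registered patch pairs
# from two domains via mean Delta-E (CIE76) in Lab space.
import numpy as np
from skimage.color import rgb2lab

rng = np.random.default_rng(0)
patch_scanner = rng.random((224, 224, 3))                 # stand-in, domain A
patch_phone = np.clip(patch_scanner * 0.8 + 0.1, 0, 1)    # domain B, color-shifted

delta = rgb2lab(patch_scanner) - rgb2lab(patch_phone)
color_shift = np.sqrt((delta ** 2).sum(axis=-1)).mean()   # mean Delta-E
print(f"Mean color difference: {color_shift:.2f}")
```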
Affiliation(s)
Mieko Ochi
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Daisuke Komura
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Takumi Onoyama
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Division of Gastroenterology and Nephrology, Department of Multidisciplinary Internal Medicine, School of Medicine, Faculty of Medicine, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
Koki Shinbo
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Haruya Endo
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Hiroto Odaka
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Miwako Kakiuchi
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Hiroto Katoh
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Tetsuo Ushiku
- Department of Pathology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Shumpei Ishikawa
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
11
Neri F, Takajjart SN, Lerner CA, Desprez PY, Schilling B, Campisi J, Gerencser AA. A Fully-Automated Senescence Test (FAST) for the high-throughput quantification of senescence-associated markers. bioRxiv [Preprint] 2024:2023.12.22.573123. [PMID: 38187756] [PMCID: PMC10769423] [DOI: 10.1101/2023.12.22.573123]
Abstract
Cellular senescence is a major driver of aging and age-related diseases. Quantification of senescent cells remains challenging due to the lack of senescence-specific markers and generalist, unbiased methodology. Here, we describe the Fully-Automated Senescence Test (FAST), an image-based method for the high-throughput, single-cell assessment of senescence in cultured cells. FAST quantifies three of the most widely adopted senescence-associated markers for each cell imaged: senescence-associated β-galactosidase activity (SA-β-Gal) using X-Gal, proliferation arrest via lack of 5-ethynyl-2'-deoxyuridine (EdU) incorporation, and enlarged morphology via increased nuclear area. The presented workflow entails microplate image acquisition, image processing, data analysis, and graphing. Standardization was achieved by i) quantifying colorimetric SA-β-Gal via optical density; ii) implementing staining background controls; iii) automating image acquisition, image processing, and data analysis. In addition to the automated threshold-based scoring, a multivariate machine learning approach is provided. We show that FAST accurately quantifies senescence burden and is agnostic to cell type and microscope setup. Moreover, it effectively mitigates false-positive senescence marker staining, a common issue arising from culturing conditions. Using FAST, we compared X-Gal with fluorescent C12FDG live-cell SA-β-Gal staining on the single-cell level. We observed only a modest correlation between the two, indicating that those stains are not trivially interchangeable. Finally, we provide proof of concept that our method is suitable for screening compounds that modify senescence burden. This method will be broadly useful to the aging field by enabling rapid, unbiased, and user-friendly quantification of senescence burden in culture, as well as facilitating large-scale experiments that were previously impractical.
Affiliation(s)
Francesco Neri
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
Pierre-Yves Desprez
- Buck Institute for Research on Aging, Novato, CA, USA
- California Pacific Medical Center, San Francisco, CA, USA
Birgit Schilling
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
Judith Campisi
- Buck Institute for Research on Aging, Novato, CA, USA
- USC Leonard Davis School of Gerontology, Los Angeles, CA, USA
12
Küttel D, Kovács L, Szölgyén Á, Paulik R, Jónás V, Kozlovszky M, Molnár B. Artifact Augmentation for Enhanced Tissue Detection in Microscope Scanner Systems. Sensors (Basel) 2023; 23:9243. [PMID: 38005629] [PMCID: PMC10675542] [DOI: 10.3390/s23229243]
Abstract
As the field of routine pathology transitions into the digital realm, there is a surging demand for the full automation of microscope scanners, aiming to expedite the digitization of tissue samples and, consequently, enhance the efficiency of case diagnosis. The key to seamless automatic imaging lies in the precise detection and segmentation of tissue sample regions on the glass slides. State-of-the-art approaches for this task lean heavily on deep learning techniques, particularly U-Net convolutional neural networks. However, since samples can be highly diverse and prepared in various ways, it is almost impossible to cover every scenario with training data. We propose a data augmentation step that artificially modifies the training data by extending artifact features present in some of the available data to the rest of the dataset, generating images that can be considered synthetic. These artifacts can include felt-pen markings, speckles of dirt, residual bubbles in covering glue, or stains. The proposed approach achieved a 1-6% improvement in F1 score on these samples.
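The augmentation idea can be sketched as alpha-compositing an artifact crop (e.g., a felt-pen stroke) onto a clean training image; the blending logic, shapes, and colors below are illustrative, not the paper's implementation.

```python
# Hedged sketch of artifact augmentation: blend an artifact crop into
# a clean training image to synthesize a new sample.
import numpy as np

def add_artifact(image: np.ndarray, artifact: np.ndarray,
                 alpha: np.ndarray, x: int, y: int) -> np.ndarray:
    """Blend `artifact` (H, W, 3) into `image` at (y, x) with mask `alpha` (H, W)."""
    out = image.copy()
    h, w = artifact.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha[..., None] * artifact
                             + (1 - alpha[..., None]) * region)
    return out

rng = np.random.default_rng(0)
clean = rng.random((512, 512, 3))          # stand-in slide overview image
pen_mark = np.zeros((64, 64, 3))
pen_mark[..., 2] = 0.8                     # bluish felt-pen color
mask = (rng.random((64, 64)) > 0.5).astype(float)  # stroke shape
augmented = add_artifact(clean, pen_mark, mask, x=100, y=200)
```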
Affiliation(s)
Dániel Küttel
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
- John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
László Kovács
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
Ákos Szölgyén
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
Róbert Paulik
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
Viktor Jónás
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
Miklós Kozlovszky
- John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
- Medical Device Research Group, LPDS, Institute for Computer Science and Control, Hungarian Academy of Sciences (SZTAKI), 1111 Budapest, Hungary
Béla Molnár
- Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
- 2nd Department of Internal Medicine, Semmelweis University, 1088 Budapest, Hungary
13
Gabriel JA, D’Amico C, Kosgodage U, Satoc J, Haine N, Willis S, Orchard GE. Evaluation of a New Mordant Based Haematoxylin Dye (Haematoxylin X) for Use in Clinical Pathology. Br J Biomed Sci 2023; 80:11591. [PMID: 37818105] [PMCID: PMC10560741] [DOI: 10.3389/bjbs.2023.11591]
Abstract
Recently, St John's Dermatopathology Laboratory and CellPath Ltd have developed a new patented haematoxylin dye (Haematoxylin X) that utilises a chromium-based mordant (chromium sulphate). In this study, the performance of this new haematoxylin (Haematoxylin X) was compared against some commonly utilised alum-based haematoxylins (Carazzi's, Harris' and Mayer's) in formalin-fixed paraffin-embedded (FFPE) tissue staining, special stains, immunohistochemical counterstaining, and frozen section (Mohs procedure) staining procedures. FFPE sections of different tissue types and frozen skin tissues were sectioned and stained with each haematoxylin subtype to allow for a direct comparison of staining quality. The slides were independently evaluated microscopically by two assessors. A combined score was generated to determine the sensitivity (defined as the intensity of haematoxylin staining being too weak or too strong and the colour of the haematoxylin staining not being blue/black) and specificity (defined as the presence of haematoxylin background staining, uneven staining, and staining deposits) for each of the four haematoxylin subtypes. The scoring criteria were based on the UK NEQAS cellular pathology techniques assessment criteria. In FFPE tissue, the results for specificity identified Harris' haematoxylin as scoring the highest (91.2%), followed by Haematoxylin X (88.0%) and Mayer's (87.0%). The sensitivity scores again identified Harris' haematoxylin as scoring the highest (95.1%), followed by Haematoxylin X (90.0%) and Mayer's (88.0%). In frozen tissue, the results for specificity identified Haematoxylin X as scoring the highest (85.5%), followed by Carazzi's (80.7%) and Harris' (77.4%). The sensitivity scores again identified Haematoxylin X as scoring the highest (86.8%), followed by Carazzi's (82.0%) and Harris' (81.0%). The results achieved with all four haematoxylins were highly comparable, with Harris' haematoxylin achieving the highest overall scores on FFPE sections. This may have been due to familiarity with the in-house use of Harris' haematoxylin. There was also evidence of more pronounced staining of extracellular mucin proteins with Haematoxylin X compared to the other alum haematoxylins that were assessed. Haematoxylin X scored highest when used in frozen section staining. In addition, Haematoxylin X has potential applications as a counterstain in IHC and special stains procedures.
Affiliation(s)
J. A. Gabriel
- St. John’s Dermatopathology, Tissue Sciences, Synnovis Analytics, St. Thomas’ Hospital, London, United Kingdom
C. D’Amico
- St. John’s Dermatopathology, Tissue Sciences, Synnovis Analytics, St. Thomas’ Hospital, London, United Kingdom
U. Kosgodage
- St. John’s Dermatopathology, Tissue Sciences, Synnovis Analytics, St. Thomas’ Hospital, London, United Kingdom
J. Satoc
- St. John’s Dermatopathology, Tissue Sciences, Synnovis Analytics, St. Thomas’ Hospital, London, United Kingdom
N. Haine
- CellPath Ltd, Powys, United Kingdom
G. E. Orchard
- St. John’s Dermatopathology, Tissue Sciences, Synnovis Analytics, St. Thomas’ Hospital, London, United Kingdom
14
Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. Sensors (Basel) 2023; 23:4993. [PMID: 37299724] [DOI: 10.3390/s23114993]
Abstract
Currently, deep learning aided medical imaging is becoming a focal point of frontier AI applications and a key trend in the future development of precision neuroscience. This review aims to provide comprehensive and informative insights into the recent progress of deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts by providing an overview of current methods for brain imaging, highlighting their limitations and introducing the potential benefits of using deep learning techniques to overcome them. It then delves into the details of deep learning, explaining the basic concepts and providing examples of how it can be used in medical imaging. A key focus is the discussion of the different types of deep learning models used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), as applied to magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other imaging modalities. Overall, this review of deep learning aided medical imaging for brain monitoring and regulation provides a useful reference for the intersection of deep learning aided neuroimaging and brain regulation.
Affiliation(s)
Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
15
Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. [PMID: 37190264] [DOI: 10.3390/cancers15082335]
Abstract
Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network discriminates cancer grade (GS3/4) from benign tissue with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
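The transfer-learning setup described above, an ImageNet-pretrained backbone with a new head tuned on histology tiles, can be sketched as follows (torchvision ≥ 0.13 weights API assumed; the architecture and schedule are illustrative, not the paper's exact configuration).

```python
# Hedged sketch of transfer learning: ImageNet-pretrained ResNet with a
# new two-class head (benign vs. GS3/4), backbone frozen for fine-tuning.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze pretrained backbone weights
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

tiles = torch.rand(8, 3, 224, 224)    # stand-in batch of histology tiles
labels = torch.randint(0, 2, (8,))
loss = criterion(model(tiles), labels)
loss.backward()                       # gradients flow only to the new head
optimizer.step()
```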
Affiliation(s)
Ryan Fogarty
- Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
Lawrence Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
Alex Lopez
- Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
Joseph Johnson
- Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
Manoj Gadara
- Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Quest Diagnostics, Tampa, FL 33612, USA
Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
Sanoj Punnen
- Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
Alan Pollack
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
Julio Pow-Sang
- Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
16
Marrón-Esquivel JM, Duran-Lopez L, Linares-Barranco A, Dominguez-Morales JP. A comparative study of the inter-observer variability on Gleason grading against Deep Learning-based approaches for prostate cancer. Comput Biol Med 2023; 159:106856. [PMID: 37075600] [DOI: 10.1016/j.compbiomed.2023.106856]
Abstract
BACKGROUND Among all the cancers known today, prostate cancer is one of the most commonly diagnosed in men. With modern advances in medicine, its mortality has been considerably reduced. However, it is still a leading type of cancer in terms of deaths. The diagnosis of prostate cancer is mainly conducted by biopsy. From this test, whole-slide images are obtained, from which pathologists diagnose the cancer according to the Gleason scale. Within this scale from 1 to 5, grade 3 and above is considered malignant tissue. Several studies have shown an inter-observer discrepancy between pathologists in assigning the value of the Gleason scale. Given recent advances in artificial intelligence, its application to the computational pathology field with the aim of supporting and providing a second opinion to the professional is of great interest. METHOD In this work, the inter-observer variability of a local dataset of 80 whole-slide images annotated by a team of 5 pathologists from the same group was analyzed at both the area and label level. Four approaches were followed to train six different convolutional neural network architectures, which were evaluated on the same dataset on which the inter-observer variability was analyzed. RESULTS An inter-observer variability of κ = 0.6946 was obtained, with a 46% discrepancy in the area of the annotations performed by the pathologists. The best trained models achieved κ = 0.826 ± 0.014 on the test set when trained with data from the same source. CONCLUSIONS The obtained results show that deep learning-based automatic diagnosis systems could help reduce the widely known inter-observer variability among pathologists and support them in their decisions, serving as a second opinion or as a triage tool for medical centers.
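The agreement statistic reported above is Cohen's kappa; for a panel of pathologists, pairwise kappas are often averaged (one common convention, assumed here rather than taken from the paper). A minimal sketch with hypothetical labels:

```python
# Hedged sketch: inter-observer agreement via mean pairwise Cohen's kappa
# (labels are hypothetical Gleason grades per annotated region).
from itertools import combinations
from statistics import mean
from sklearn.metrics import cohen_kappa_score

annotations = {
    "pathologist_1": [3, 3, 4, 5, 3, 4],
    "pathologist_2": [3, 4, 4, 5, 3, 3],
    "pathologist_3": [3, 3, 4, 4, 3, 4],
}

kappas = [cohen_kappa_score(annotations[a], annotations[b])
          for a, b in combinations(annotations, 2)]
print(f"Mean pairwise kappa: {mean(kappas):.4f}")
```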