1. Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024;15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363.
Abstract
Advancements in digital pathology and computing resources have significantly advanced computational pathology for breast cancer diagnosis and treatment. However, limited access to high-quality labeled histopathological images of breast cancer remains a major obstacle to developing accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10,385 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting appropriate datasets for specific tasks in breast cancer computational pathology. In addition, we compiled two supplementary lists, one of breast H&E patch datasets and one of private datasets, as additional resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the reported performance of many of the developed models may be overestimated. The TCGA-BRCA dataset was used in 52% of the selected studies; this dataset has a considerable selection bias that can affect the robustness and generalizability of the trained algorithms. Metadata reporting for breast WSI datasets is also inconsistent, which can hinder the development of accurate deep learning models and indicates the need for explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
2. Bianco V, Valentino M, Pirone D, Miccio L, Memmolo P, Brancato V, Coppola L, Smaldone G, D’Aiuto M, Mossetti G, Salvatore M, Ferraro P. Classifying breast cancer and fibroadenoma tissue biopsies from paraffined stain-free slides by fractal biomarkers in Fourier Ptychographic Microscopy. Comput Struct Biotechnol J 2024;24:225-236. PMID: 38572166; PMCID: PMC10990711; DOI: 10.1016/j.csbj.2024.03.019.
Abstract
Breast cancer is one of the most widespread and most closely monitored diseases in high-income countries. After breast biopsy, histological tissue is embedded in paraffin, sectioned, and mounted. Conventional inspection of tissue slides under benchtop light microscopes requires paraffin removal and staining, typically with H&E, after which expert pathologists judge the stained slides. However, paraffin removal and staining are operator-dependent, time- and resource-consuming processes that can generate ambiguities due to non-uniform staining. Here we propose a novel method that works directly on paraffined stain-free slides. We use Fourier Ptychography, a quantitative phase-contrast microscopy method that provides access to a very wide field of view (i.e., mm²) in a single image while guaranteeing high lateral resolution (i.e., 0.5 µm). This imaging method is multi-scale: it enables looking at the big picture, i.e., the complex tissue structure and connections, with the possibility to zoom in down to the single-cell level. To handle this informative image content, we introduce elements of fractal geometry as a multi-scale analysis method. We show the effectiveness of fractal features in describing and classifying fibroadenoma and breast cancer tissue slides from ten patients with very high accuracy, reaching 94.0 ± 4.2% test accuracy on single images. Above all, we show that by combining the decisions on the single images, each patient's slide can be classified with no error. Besides, fractal geometry returns a guide map to help pathologists judge the different tissue portions based on the likelihood that they are associated with a breast cancer or fibroadenoma biomarker. The proposed automatic method could significantly simplify tissue analysis and make it independent of sample preparation, the skills of the lab operator, and the pathologist.
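The abstract's two computational steps, multi-scale fractal characterization of each image and fusion of the single-image decisions into one per-patient call, can be sketched generically. The snippet below is not the authors' pipeline: it uses plain box counting as a stand-in for their fractal biomarkers and a simple majority vote for the decision fusion, both of which are assumptions for illustration only.

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary 2-D mask:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count occupied boxes.
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)

def patient_decision(image_votes):
    """Fuse per-image binary calls (1 = cancer, 0 = fibroadenoma)
    into one per-patient call by majority vote."""
    return int(sum(image_votes) > len(image_votes) / 2)
```

A solid region fills the plane, so its estimated dimension is 2; sparser, more irregular textures score lower, which is the kind of multi-scale property fractal descriptors exploit.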
Affiliation(s)
- Vittorio Bianco
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Marika Valentino
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- DIETI, Department of Electrical Engineering and Information Technologies, University of Naples “Federico II”, via Claudio 21, 80125 Napoli, Italy
- Daniele Pirone
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Lisa Miccio
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pasquale Memmolo
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Luigi Coppola
- IRCCS SYNLAB SDN, Via E. Gianturco 113, 80143 Napoli, Italy
- Gennaro Mossetti
- Pathological Anatomy Service, Casa di Cura Maria Rosaria, Via Colle San Bartolomeo 50, 80045 Pompei, Napoli, Italy
- Pietro Ferraro
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello”, Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
3. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics that can serve as assistive computer-aided diagnosis (CAD) systems in clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, a considerable gap remains in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, in order to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked to address the challenges of such a multidisciplinary science, and we examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
4. Glass M, Ji Z, Davis R, Pavlisko EN, DiBernardo L, Carney J, Fishbein G, Luthringer D, Miller D, Mitchell R, Larsen B, Butt Y, Bois M, Maleszewski J, Halushka M, Seidman M, Lin CY, Buja M, Stone J, Dov D, Carin L, Glass C. A machine learning algorithm improves the diagnostic accuracy of the histologic component of antibody mediated rejection (AMR-H) in cardiac transplant endomyocardial biopsies. Cardiovasc Pathol 2024;72:107646. PMID: 38677634; DOI: 10.1016/j.carpath.2024.107646.
Abstract
BACKGROUND Pathologic antibody mediated rejection (pAMR) remains a major driver of graft failure in cardiac transplant patients. The endomyocardial biopsy remains the primary diagnostic tool but presents challenges, particularly in distinguishing the histologic component (pAMR-H), defined by (1) intravascular macrophage accumulation in capillaries and (2) activated endothelial cells with expanded cytoplasm that narrows or occludes the vascular lumen. Frequently, pAMR-H is difficult to distinguish from acute cellular rejection (ACR) and healing injury. With the advent of digital slide scanning and advances in deep learning, artificial intelligence is under wide investigation in oncologic pathology but remains in its infancy in transplant pathology. For the first time, we determined whether a machine learning algorithm could distinguish pAMR-H from normal myocardium, healing injury, and ACR. MATERIALS AND METHODS A total of 4,212 annotations (1,053 regions each of normal myocardium, pAMR-H, healing injury, and ACR) were completed from 300 hematoxylin and eosin slides scanned using a Leica Aperio GT450 digital whole slide scanner at 40X magnification. All regions of pAMR-H were annotated from patients with a previously confirmed diagnosis of pAMR2 (>50% positive C4d immunofluorescence and/or >10% CD68-positive intravascular macrophages). Annotations were imported into a Python 3.7 development environment using the OpenSlide™ package, and a convolutional neural network approach utilizing transfer learning was applied. RESULTS The machine learning algorithm showed 98% overall validation accuracy, and pAMR-H was correctly distinguished from the specific categories with the following accuracies: normal myocardium (99.2%), healing injury (99.5%), and ACR (99.5%). CONCLUSION Our deep learning algorithm can match, and possibly surpass, the performance of current diagnostic standards for identifying pAMR-H. Such a tool may serve as an adjunct diagnostic aid for improving the pathologist's accuracy and reproducibility, especially in difficult cases with high inter-observer variability. This is one of the first studies providing evidence that a machine learning algorithm can be trained and validated to diagnose pAMR-H in cardiac transplant patients. Ongoing studies include multi-institutional verification testing to ensure generalizability.
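The reported figures (98% overall validation accuracy; 99.2-99.5% per category) are overall and per-class accuracies. A minimal sketch of how such numbers are computed from validation labels is shown below; the four class names are taken from the abstract, everything else is a generic illustration rather than the authors' code.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Overall accuracy plus per-class accuracy for a multiclass
    classifier, e.g. the four-way normal / pAMR-H / healing / ACR model."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        correct[truth] += int(truth == pred)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {c: correct[c] / total[c] for c in total}
```

Per-class accuracy here is the fraction of each true class's examples the model labels correctly, which is how a single class like pAMR-H can be scored separately from the overall figure.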
Affiliation(s)
- Matthew Glass
- Duke Division of Artificial Intelligence and Computational Pathology, Duke University Medical Center, Durham NC, USA; Department of Anesthesiology, Duke University Medical Center, Durham NC, USA
- Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke School of Medicine, Durham NC, USA
- Richard Davis
- Department of Pathology, Duke University Medical Center, Durham NC, USA
- Elizabeth N Pavlisko
- Duke Division of Artificial Intelligence and Computational Pathology, Duke University Medical Center, Durham NC, USA; Department of Pathology, Duke University Medical Center, Durham NC, USA
- Louis DiBernardo
- Department of Pathology, Duke University Medical Center, Durham NC, USA
- John Carney
- Department of Pathology, Duke University Medical Center, Durham NC, USA
- Gregory Fishbein
- Department of Pathology, University of California at Los Angeles, Los Angeles CA, USA
- Daniel Luthringer
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles CA, USA
- Dylan Miller
- Department of Pathology, Intermountain Healthcare, Salt Lake City UT, USA
- Richard Mitchell
- Department of Pathology, Brigham and Women's Hospital, Boston MA, USA
- Brandon Larsen
- Department of Pathology and Laboratory Medicine, Mayo Clinic, Phoenix AZ, USA
- Yasmeen Butt
- Department of Pathology and Laboratory Medicine, Mayo Clinic, Phoenix AZ, USA
- Melanie Bois
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester MN, USA
- Joseph Maleszewski
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester MN, USA
- Marc Halushka
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore MD, USA
- Michael Seidman
- Department of Pathology, University Health Network, Toronto, ON, Canada
- Chieh-Yu Lin
- Department of Pathology and Immunology, Washington University, St. Louis MO, USA
- Maximilian Buja
- Department of Pathology and Laboratory Medicine, The University of Texas Health Science Center at Houston, Houston TX, USA
- James Stone
- Department of Pathology, Massachusetts General Hospital, Boston MA, USA
- David Dov
- Duke Division of Artificial Intelligence and Computational Pathology, Duke University Medical Center, Durham NC, USA; Pratt School of Engineering, Department of Electrical and Computer Engineering, Duke University, Durham NC, USA
- Lawrence Carin
- Duke Division of Artificial Intelligence and Computational Pathology, Duke University Medical Center, Durham NC, USA; Pratt School of Engineering, Department of Electrical and Computer Engineering, Duke University, Durham NC, USA
- Carolyn Glass
- Duke Division of Artificial Intelligence and Computational Pathology, Duke University Medical Center, Durham NC, USA; Department of Pathology, Duke University Medical Center, Durham NC, USA
5. Saqi A, Liu Y, Politis MG, Salvatore M, Jambawalikar S. Combined expert-in-the-loop-random forest multiclass segmentation U-net based artificial intelligence model: evaluation of non-small cell lung cancer in fibrotic and non-fibrotic microenvironments. J Transl Med 2024;22:640. PMID: 38978066; PMCID: PMC11232199; DOI: 10.1186/s12967-024-05394-2.
Abstract
BACKGROUND The tumor microenvironment (TME) plays a key role in lung cancer initiation, proliferation, invasion, and metastasis. Artificial intelligence (AI) methods could potentially accelerate TME analysis. The aims of this study were (1) to assess the feasibility of using hematoxylin and eosin (H&E)-stained whole slide images (WSIs) to develop an AI model for evaluating the TME and (2) to characterize the TME of adenocarcinoma (ADCA) and squamous cell carcinoma (SCCA) in fibrotic and non-fibrotic lung. METHODS The cohort was derived from chest CT scans of patients presenting with lung neoplasms, with and without background fibrosis. WSIs were generated from slides of all 76 available pathology cases with ADCA (n = 53) or SCCA (n = 23) in fibrotic (n = 47) or non-fibrotic (n = 29) lung. Detailed ground-truth annotations, including stroma (i.e., fibrosis, vessels, inflammation), necrosis, and background, were performed on the WSIs and optimized via an expert-in-the-loop (EITL) iterative procedure using a lightweight random forest (RF) classifier. A convolutional neural network (CNN)-based model was used to achieve tissue-level multiclass segmentation. The model was trained on 25 annotated WSIs from 13 cases of ADCA and SCCA, with and without fibrosis, and then applied to the 76-case cohort. The TME analysis included tumor stroma ratio (TSR), tumor fibrosis ratio (TFR), tumor inflammation ratio (TIR), tumor vessel ratio (TVR), tumor necrosis ratio (TNR), and tumor background ratio (TBR). RESULTS The model's overall classification precision, sensitivity, and F1-score were 94%, 90%, and 91%, respectively. Statistically significant differences were noted in TSR (p = 0.041) and TFR (p = 0.001) between fibrotic and non-fibrotic ADCA. Within fibrotic lung, statistically significant differences were present in TFR (p = 0.039), TIR (p = 0.003), TVR (p = 0.041), TNR (p = 0.0003), and TBR (p = 0.020) between ADCA and SCCA. CONCLUSION The combined EITL-RF CNN model using only H&E WSIs can facilitate multiclass evaluation and quantification of the TME. Significant differences exist between the TME of ADCA and SCCA, with or without background fibrosis. Future studies are needed to determine the significance of the TME for prognosis and treatment.
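The tumor-compartment ratios listed in METHODS (TFR, TIR, TVR, TNR, TBR) can in principle be read off a multiclass segmentation mask by pixel counting. The sketch below assumes hypothetical integer label codes and a ratio defined as compartment area over tumor area; the paper's exact definitions may differ.

```python
import numpy as np

# Hypothetical label codes for the multiclass segmentation output.
TUMOR, FIBROSIS, INFLAMMATION, VESSEL, NECROSIS, BACKGROUND = range(6)

def tme_ratios(seg):
    """Pixel area of each non-tumor compartment relative to tumor area,
    analogous to the abstract's TFR/TIR/TVR/TNR/TBR (definitions assumed)."""
    tumor = np.count_nonzero(seg == TUMOR)
    compartments = {"TFR": FIBROSIS, "TIR": INFLAMMATION,
                    "TVR": VESSEL, "TNR": NECROSIS, "TBR": BACKGROUND}
    return {name: np.count_nonzero(seg == code) / tumor
            for name, code in compartments.items()}
```

On a real WSI the mask would be the CNN's per-pixel output; here any integer array with the assumed codes works.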
Affiliation(s)
- Anjali Saqi
- Department of Pathology and Cell Biology, Columbia University Irving Medical Center, 630 West 168th Street, VC14-215, New York, NY 10032, USA
- Yucheng Liu
- Department of Radiation Physics, Atlantic Health System, NJ, USA
- Michelle Garlin Politis
- Department of Pathology and Cell Biology, Columbia University Irving Medical Center, 630 West 168th Street, VC14-215, New York, NY 10032, USA
- Mary Salvatore
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA
- Sachin Jambawalikar
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA
6. Lorenzo G, Ahmed SR, Hormuth DA, Vaughn B, Kalpathy-Cramer J, Solorio L, Yankeelov TE, Gomez H. Patient-Specific, Mechanistic Models of Tumor Growth Incorporating Artificial Intelligence and Big Data. Annu Rev Biomed Eng 2024;26:529-560. PMID: 38594947; DOI: 10.1146/annurev-bioeng-081623-025834.
Abstract
Despite the remarkable advances in cancer diagnosis, treatment, and management over the past decade, malignant tumors remain a major public health problem. Further progress in combating cancer may be enabled by personalizing the delivery of therapies according to the predicted response of each individual patient. The design of personalized therapies requires the integration of patient-specific information with an appropriate mathematical model of tumor response. A fundamental barrier to realizing this paradigm is the current lack of a rigorous yet practical mathematical theory of tumor initiation, development, invasion, and response to therapy. We begin this review with an overview of different approaches to modeling tumor growth and treatment, including mechanistic models as well as data-driven models based on big data and artificial intelligence. We then present illustrative examples of mathematical models demonstrating their utility and discuss the limitations of stand-alone mechanistic and data-driven models. Next, we discuss the potential of mechanistic models for not only predicting but also optimizing response to therapy on a patient-specific basis, and we describe current efforts and future possibilities to integrate mechanistic and data-driven models. We conclude by proposing five fundamental challenges that must be addressed to fully realize personalized care for cancer patients driven by computational models.
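As a concrete instance of the mechanistic models such reviews survey, the classical logistic growth equation dN/dt = r*N*(1 - N/K) can be calibrated per patient through its two parameters: growth rate r and carrying capacity K. A forward-Euler sketch is below; the parameter values are illustrative and not taken from the review.

```python
def logistic_growth(n0, r, K, dt, steps):
    """Forward-Euler integration of dN/dt = r * N * (1 - N / K):
    tumor burden N grows roughly exponentially at first, then
    saturates at the carrying capacity K."""
    n = n0
    traj = [n0]
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)
        traj.append(n)
    return traj
```

Fitting r and K to an individual patient's longitudinal imaging data, then forecasting the trajectory forward, is the basic pattern that patient-specific mechanistic modeling builds on.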
Affiliation(s)
- Guillermo Lorenzo
- Oden Institute for Computational Engineering and Sciences, University of Texas, Austin, Texas, USA
- Department of Civil Engineering and Architecture, University of Pavia, Pavia, Italy
- Syed Rakin Ahmed
- Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire, USA
- Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Harvard Graduate Program in Biophysics, Harvard Medical School, Harvard University, Cambridge, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- David A Hormuth
- Livestrong Cancer Institutes, University of Texas, Austin, Texas, USA
- Oden Institute for Computational Engineering and Sciences, University of Texas, Austin, Texas, USA
- Brenna Vaughn
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Luis Solorio
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Thomas E Yankeelov
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, Texas, USA
- Department of Biomedical Engineering, Department of Oncology, and Department of Diagnostic Medicine, University of Texas, Austin, Texas, USA
- Livestrong Cancer Institutes, University of Texas, Austin, Texas, USA
- Oden Institute for Computational Engineering and Sciences, University of Texas, Austin, Texas, USA
- Hector Gomez
- School of Mechanical Engineering and Purdue Center for Cancer Research, Purdue University, West Lafayette, Indiana, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
7. Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024;161:431-478. PMID: 39032956; DOI: 10.1016/bs.acr.2024.05.006.
Abstract
The integration of computer vision into pathology through slide digitization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. Advanced algorithms and deep learning architectures such as CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration among pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter also addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
- Justin Chang
- Virginia Commonwealth University Health System, Richmond, VA, United States
- Bryce Hatfield
- Virginia Commonwealth University Health System, Richmond, VA, United States
8. Williams DKA, Graifman G, Hussain N, Amiel M, Tran P, Reddy A, Haider A, Kavitesh BK, Li A, Alishahian L, Perera N, Efros C, Babu M, Tharakan M, Etienne M, Babu BA. Digital pathology, deep learning, and cancer: a narrative review. Transl Cancer Res 2024;13:2544-2560. PMID: 38881914; PMCID: PMC11170525; DOI: 10.21037/tcr-23-964.
Abstract
Background and Objective Cancer is a leading cause of morbidity and mortality worldwide. The emergence of digital pathology and deep learning technologies signifies a transformative era in healthcare: these technologies can enhance cancer detection, streamline operations, and bolster patient care. However, a substantial gap exists between the development of deep learning models in controlled laboratory environments and their translation into clinical practice. This narrative review evaluates the current landscape of deep learning and digital pathology, analyzing the factors that influence model development and implementation in clinical practice. Methods We searched multiple databases, including Web of Science, arXiv, medRxiv, bioRxiv, Embase, PubMed, DBLP, Google Scholar, IEEE Xplore, Semantic Scholar, and Cochrane, targeting articles on whole slide imaging and deep learning published between 2014 and 2023. Of the 776 articles identified based on the inclusion criteria, we selected 36 papers for analysis. Key Content and Findings Most articles in this review focus on the in-laboratory phase of deep learning model development, a critical stage in the deep learning lifecycle. Challenges arise both during model development and during integration into clinical practice; notably, laboratory performance metrics may not always match real-world clinical outcomes. As technology advances and regulations evolve, we expect more clinical trials to bridge this performance gap and validate the effectiveness of deep learning models in clinical care. High clinical accuracy is vital for informed decision-making throughout a patient's cancer care. Conclusions Deep learning technology can enhance cancer detection, clinical workflows, and patient care, but challenges may arise during model development. The deep learning lifecycle involves data preprocessing, model development, and clinical implementation. Achieving health equity requires including diverse patient groups and eliminating bias during implementation. While model development is integral, most articles focus on the pre-deployment phase; future longitudinal studies are crucial for validating models in real-world post-deployment settings. A collaborative approach among computational pathologists, technologists, industry, and healthcare providers is essential for driving adoption in clinical settings.
Affiliation(s)
- Nowair Hussain
- Department of Internal Medicine, Overlook Medical Center, Summit, NJ, USA
- Arjun Reddy
- Applied Mathematics & Statistics, Stony Brook University, Stony Brook, NY, USA
- Ali Haider
- Department of Artificial Intelligence, Yeshiva University, New York, NY, USA
- Bali Kumar Kavitesh
- Centre for Frontier AI Research (CFAR), Agency for Science, Technology, and Research (A*STAR), Singapore, Singapore
- Austin Li
- New York Medical College, Valhalla, NY, USA
- Myoungmee Babu
- Artificial Intelligence and Mathematics, New York City Department of Education, New York, NY, USA
- Mill Etienne
- Department of Neurology, New York Medical College, Valhalla, NY, USA
- Benson A Babu
- New York Medical College, Valhalla, NY, USA
- Department of Hospital Medicine, Wyckoff Medical Center, New York, NY, USA
9. Jain A, Perdomo D, Nagururu N, Li JA, Ward BK, Lauer AM, Creighton FX. SVPath: A Deep Learning Tool for Analysis of Stria Vascularis from Histology Slides. J Assoc Res Otolaryngol 2024. PMID: 38760547; DOI: 10.1007/s10162-024-00948-z.
Abstract
INTRODUCTION The stria vascularis (SV) may play a significant role in various otologic pathologies. Currently, researchers manually segment and analyze the stria vascularis to measure structural atrophy. Our group developed a tool, SVPath, that uses deep learning to extract and analyze the stria vascularis and its associated capillary bed from whole temporal bone histopathology slides (TBS). METHODS This study used an internal dataset of 203 digitized hematoxylin and eosin-stained sections from a normal macaque ear and a separate external validation set of 10 sections from another normal macaque ear. SVPath employed the deep learning methods YOLOv8 and nnU-Net to detect and segment the SV features from TBS, respectively. The results from this process were analyzed with the SV Analysis Tool (SVAT) to measure SV capillaries and features related to SV morphology, including width, area, and cell count. Once the model was developed, both YOLOv8 and nnU-Net were validated on the internal and external datasets. RESULTS The YOLOv8 implementation achieved over 90% accuracy for cochlea and SV detection. nnU-Net SV segmentation achieved a Dice score of 0.84-0.95; the capillary bed Dice score was 0.75-0.88. SVAT was applied to compare the two ears used in the study. There was no statistical difference in SV width, SV area, or average capillary area between the two ears. There was a statistical difference between the two ears in cell count per SV. CONCLUSION The proposed method accurately and efficiently analyzes the SV from temporal bone histopathology slides, creating a platform for researchers to further understand the function of the SV.
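The segmentation quality figures quoted above (Dice 0.84-0.95 for the SV, 0.75-0.88 for the capillary bed) use the Dice coefficient, which measures overlap between a predicted and a ground-truth mask. A minimal implementation on toy masks (not the SVPath code) for reference:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy example: two 10x10 square masks offset by 2 rows -> 80-pixel overlap.
a = np.zeros((20, 20), dtype=bool)
a[0:10, 0:10] = True
b = np.zeros((20, 20), dtype=bool)
b[2:12, 0:10] = True
score = dice(a, b)  # 2*80 / (100 + 100) = 0.8
```

A Dice score of 1.0 means perfect overlap; identical masks always score 1.0, disjoint masks 0.0.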
Affiliation(s)
- Aseem Jain
- College of Medicine, University of Cincinnati, 231 Albert Sabin Way, Cincinnati, OH, 45267, USA.
- Dianela Perdomo
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nimesh Nagururu
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jintong Alice Li
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Bryan K Ward
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Amanda M Lauer
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francis X Creighton
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
10
Hetz MJ, Bucher TC, Brinker TJ. Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images. Med Image Anal 2024; 94:103149. [PMID: 38574542 DOI: 10.1016/j.media.2024.103149] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 12/11/2023] [Accepted: 03/20/2024] [Indexed: 04/06/2024]
Abstract
The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, differences in staining introduce biases into training that, under domain shift, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the ability to reduce the domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides a very high image quality among the compared methods, and most reliably fools the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of the whole slide image can be disguised on the other, thus enhancing patient data privacy.
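MultiStain-CycleGAN itself is a trained GAN, but the classical baseline such methods are typically compared against, Reinhard-style channel-statistics matching, is easy to sketch. The following is that baseline, not the paper's method, and for brevity it operates directly in RGB rather than the LAB color space of the original Reinhard approach; the tiles are synthetic:

```python
import numpy as np

def reinhard_normalize(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match per-channel mean/std of `src` to those of `ref`
    (Reinhard-style baseline, applied in RGB here for brevity)."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
ref = rng.normal(180, 20, size=(32, 32, 3))   # "target stain" tile
src = rng.normal(120, 40, size=(32, 32, 3))   # differently stained tile
norm = reinhard_normalize(src, ref)           # src recolored toward ref
```

After normalization, each channel of `norm` has approximately the same mean and standard deviation as the reference tile, which is exactly the statistic a domain classifier would otherwise exploit.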
Affiliation(s)
- Martin J Hetz
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tabea-Clara Bucher
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
11
Mu Y, Tizhoosh HR, Dehkharghanian T, Alfasly S, Campbell CJV. Model-Agnostic Binary Patch Grouping for Bone Marrow Whole Slide Image Representation. THE AMERICAN JOURNAL OF PATHOLOGY 2024; 194:721-734. [PMID: 38320631 DOI: 10.1016/j.ajpath.2024.01.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Revised: 12/29/2023] [Accepted: 01/10/2024] [Indexed: 02/08/2024]
Abstract
Histopathology is the reference standard for pathology diagnosis, and has evolved with the digitization of glass slides [ie, whole slide images (WSIs)]. While trained histopathologists are able to diagnose diseases by examining WSIs visually, this process is time consuming and prone to variability. To address these issues, artificial intelligence models are being developed to generate slide-level representations of WSIs, summarizing the entire slide as a single vector. This enables various computational pathology applications, including interslide search, multimodal training, and slide-level classification. Achieving expressive and robust slide-level representations hinges on patch feature extraction and aggregation steps. This study proposed an additional binary patch grouping (BPG) step, a plugin that can be integrated into various slide-level representation pipelines, to enhance the quality of slide-level representation in bone marrow histopathology. BPG excludes patches with less clinical relevance through minimal interaction with the pathologist; a one-time human intervention for the entire process. This study further investigated domain-general versus domain-specific feature extraction models based on convolution and attention and examined two different feature aggregation methods, with and without BPG, showing BPG's generalizability. The results showed that using BPG boosts the performance of WSI retrieval (mean average precision at 10) by 4% and improves WSI classification (weighted-F1) by 5% compared to not using BPG. Additionally, domain-general large models and parameterized pooling produced the best-quality slide-level representations.
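The core mechanic of binary patch grouping, excluding low-relevance patches before aggregating tile features into a slide vector, can be sketched as follows. Everything here is synthetic: the relevance signal is a random stand-in (the real BPG criterion comes from a one-time pathologist interaction, not from the features), and mean pooling stands in for the paper's aggregation methods:

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches, d = 200, 64

feats = rng.normal(size=(n_patches, d))   # patch feature vectors of one WSI

# Toy stand-in for clinical relevance; in BPG this comes from a single
# human-in-the-loop labeling step, not from the features themselves.
relevance = rng.random(n_patches)

keep = relevance >= 0.5                    # the binary patch grouping mask
naive_slide_vec = feats.mean(axis=0)       # aggregate every patch
bpg_slide_vec = feats[keep].mean(axis=0)   # aggregate kept patches only
```

The slide-level representation keeps the same dimensionality either way; BPG only changes which patches contribute to it.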
Affiliation(s)
- Youqing Mu
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario, Canada; Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
- Hamid R Tizhoosh
- Rhazes Lab, Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Taher Dehkharghanian
- Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada; Department of Nephrology, University Health Network, Toronto, Ontario, Canada
- Saghir Alfasly
- Rhazes Lab, Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Clinton J V Campbell
- Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
12
Afonso M, Bhawsar PMS, Saha M, Almeida JS, Oliveira AL. Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning. ARXIV 2024:arXiv:2404.01446v2. [PMID: 38903738 PMCID: PMC11188133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 06/22/2024]
Abstract
Whole Slide Images (WSIs), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern digital pathology. However, they represent a particular challenge for AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level rather than the tile level. Not only are medical diagnoses recorded at the specimen level; oncogene mutation status is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This poses a dual challenge: a) accurately predicting the overall cancer phenotype and b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC). This approach was explored for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these different AI architectures exhibit distinct sensitivities to morphological features (through the detection of Regions of Interest, RoIs) at different magnification levels. Tellingly, TP53 mutation was most sensitive to features at the higher magnifications, where cellular morphology is resolved.
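Attention MIL, the strongest aggregator in this comparison, computes one scalar score per tile and pools tile embeddings by their softmax weights. A minimal numpy forward pass with untrained random parameters, following the standard attention-MIL formulation of Ilse et al. rather than necessarily the exact variant used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tiles, d, d_attn = 50, 32, 16

tiles = rng.normal(size=(n_tiles, d))    # embeddings of one slide's tiles

V = 0.1 * rng.normal(size=(d, d_attn))   # attention projection (untrained)
w = 0.1 * rng.normal(size=(d_attn,))     # attention vector (untrained)

scores = np.tanh(tiles @ V) @ w          # one relevance score per tile
a = np.exp(scores - scores.max())
a /= a.sum()                             # softmax -> attention weights
slide_vec = a @ tiles                    # weighted sum = slide representation
```

The attention weights `a` double as an interpretability signal: tiles with high weight are the model's candidate regions of interest.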
Affiliation(s)
- Martim Afonso
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, Lisbon, 1049-001, Portugal
- Praphulla M S Bhawsar
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD 20850, USA
- Monjoy Saha
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD 20850, USA
- Jonas S Almeida
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD 20850, USA
- Arlindo L Oliveira
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, Lisbon, 1049-001, Portugal
- INESC-ID, R. Alves Redol 9, Lisbon, 1000-029, Portugal
13
Bontempo G, Bolelli F, Porrello A, Calderara S, Ficarra E. A Graph-Based Multi-Scale Approach With Knowledge Distillation for WSI Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1412-1421. [PMID: 38015690 DOI: 10.1109/tmi.2023.3337549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2023]
Abstract
The use of Multiple Instance Learning (MIL) for classifying Whole Slide Images (WSIs) has recently increased. Due to their gigapixel size, the pixel-level annotation of such data is extremely expensive and time-consuming, and practically infeasible. For this reason, multiple automatic approaches have been proposed in recent years to support clinical practice and diagnosis. Unfortunately, most state-of-the-art proposals apply attention mechanisms without considering the spatial instance correlation and usually work at a single-scale resolution. To leverage the full potential of pyramidal-structured WSIs, we propose a graph-based multi-scale MIL approach, DAS-MIL. Our model comprises three modules: i) a self-supervised feature extractor; ii) a graph-based architecture that precedes the MIL mechanism and aims at creating a more contextualized representation of the WSI structure by considering the mutual (spatial) instance correlation both inter- and intra-scale; and iii) a (self-)distillation loss between resolutions, introduced to compensate for their informative gap and significantly improve the final prediction. The effectiveness of the proposed framework is demonstrated on two well-known datasets, where we outperform SOTA on WSI classification, gaining a +2.7% AUC and +3.7% accuracy on the popular Camelyon16 benchmark.
14
Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024; 15:2710. [PMID: 38548713 PMCID: PMC10978962 DOI: 10.1038/s41467-024-46986-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based features or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at a single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and placental biology literature.
Affiliation(s)
- Claudia Vanea
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
- Omri Dodi
- Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Siim Siigur
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Liis Salumäe
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Karen Meir
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- W Tony Parks
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
- Abigail Fraser
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
- Hagit Hochner
- Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
- Triin Laisk
- Institute of Genomics, University of Tartu, Tartu, Estonia
- Linda M Ernst
- Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA
- Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
- Cecilia M Lindgren
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Christoffer Nellåker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
15
Dimitriou N, Arandjelović O, Harrison DJ. Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. [PMID: 38472996 DOI: 10.3390/diagnostics14050524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 02/25/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024] Open
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature-which rely on the splitting of the original images into small patches-and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets-as well as the proposed optimization framework-in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
Affiliation(s)
- Neofytos Dimitriou
- Maritime Digitalisation Centre, Cyprus Marine and Maritime Institute, Larnaca 6300, Cyprus
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- David J Harrison
- School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK
- NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh EH16 4SA, UK
16
Schreiber BA, Denholm J, Jaeckle F, Arends MJ, Branson KM, Schönlieb CB, Soilleux EJ. Rapid artefact removal and H&E-stained tissue segmentation. Sci Rep 2024; 14:309. [PMID: 38172562 PMCID: PMC10764721 DOI: 10.1038/s41598-023-50183-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 12/16/2023] [Indexed: 01/05/2024] Open
Abstract
We present an innovative method for rapidly segmenting haematoxylin and eosin (H&E)-stained tissue in whole-slide images (WSIs) that eliminates a wide range of undesirable artefacts such as pen marks and scanning artefacts. Our method involves taking a single-channel representation of a low-magnification RGB overview of the WSI in which the pixel values are bimodally distributed such that H&E-stained tissue is easily distinguished from both background and a wide variety of artefacts. We demonstrate our method on 30 WSIs prepared from a wide range of institutions and WSI digital scanners, each containing substantial artefacts, and compare it to segmentations provided by Otsu thresholding and Histolab tissue segmentation and pen filtering tools. We found that our method segmented the tissue and fully removed all artefacts in 29 out of 30 WSIs, whereas Otsu thresholding failed to remove any artefacts, and the Histolab pen filtering tools only partially removed the pen marks. The beauty of our approach lies in its simplicity: manipulating RGB colour space and using Otsu thresholding allows for the segmentation of H&E-stained tissue and the rapid removal of artefacts without the need for machine learning or parameter tuning.
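The heart of the approach, Otsu thresholding on a single-channel image whose values are bimodally distributed, fits in a few lines. The sketch below implements Otsu's method from scratch on a synthetic bimodal "overview" image; the paper's specific RGB-to-single-channel transform is not reproduced here, so a generic intensity channel stands in for it:

```python
import numpy as np

def otsu_threshold(channel: np.ndarray, bins: int = 256) -> float:
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(channel, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)            # probability mass of class 0 up to bin i
    mu = np.cumsum(hist * centers)  # cumulative mean up to bin i
    mu_total = mu[-1]
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins - 1):
        w1 = 1.0 - w0[i]
        if w0[i] == 0 or w1 == 0:
            continue
        mu0 = mu[i] / w0[i]
        mu1 = (mu_total - mu[i]) / w1
        var_between = w0[i] * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Synthetic bimodal overview: bright background over darker stained tissue.
rng = np.random.default_rng(0)
background = rng.normal(220, 5, size=(64, 64))
tissue = rng.normal(120, 10, size=(64, 64))
img = np.concatenate([background, tissue], axis=0)

t = otsu_threshold(img)
mask = img < t   # True where tissue
```

Because the distribution is bimodal, the threshold lands between the two modes and the resulting mask cleanly separates tissue from background, with no training or parameter tuning, as the abstract emphasizes.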
Affiliation(s)
- B A Schreiber
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK.
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK.
- J Denholm
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- F Jaeckle
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- M J Arends
- Edinburgh Pathology, Institute of Genetics and Cancer, University of Edinburgh, Crewe Road, Edinburgh, EH4 2XR, UK
- K M Branson
- Artificial Intelligence and Machine Learning, GSK plc., Great West Road, Brentford, TW8 9GS, Middlesex, UK
- C-B Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- E J Soilleux
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK.
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK.
17
Duschner N, Baguer DO, Schmidt M, Griewank KG, Hadaschik E, Hetzer S, Wiepjes B, Le'Clerc Arrastia J, Jansen P, Maass P, Schaller J. Applying an artificial intelligence deep learning approach to routine dermatopathological diagnosis of basal cell carcinoma. J Dtsch Dermatol Ges 2023; 21:1329-1337. [PMID: 37814387 DOI: 10.1111/ddg.15180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 06/15/2023] [Indexed: 10/11/2023]
Abstract
BACKGROUND Institutes of dermatopathology are faced with considerable challenges, including continuously rising numbers of submitted specimens and a shortage of specialized health care practitioners. Basal cell carcinoma (BCC) is one of the most common tumors in the fair-skinned western population and represents a major part of samples submitted for histological evaluation. Digitalizing glass slides has enabled the application of artificial intelligence (AI)-based procedures. To date, these methods have found only limited application in routine diagnostics. The aim of this study was to establish an AI-based model for automated BCC detection. PATIENTS AND METHODS In three dermatopathological centers, daily routine practice BCC cases were digitalized. The diagnosis was made both conventionally by analog microscope and digitally through an AI-supported algorithm based on a neural network with U-Net architecture. RESULTS In routine practice, the model achieved a sensitivity of 98.23% and a specificity of 98.51% (center 1). The model generalized successfully, without additional training, to samples from the other centers, achieving similarly high accuracy in BCC detection (sensitivities of 97.67% and 98.57% and specificities of 96.77% and 98.73% in centers 2 and 3, respectively). In addition, automated AI-based basal cell carcinoma subtyping and tumor thickness measurement were established. CONCLUSIONS AI-based methods can detect BCC with high accuracy in a routine clinical setting and significantly support dermatopathological work.
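The headline figures are standard confusion-matrix rates. As a reminder of how they are computed, here is the calculation with hypothetical counts, picked only so the rates land near those reported for center 1, not taken from the study:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts (illustration only, not the study's data):
sens, spec = sensitivity_specificity(tp=555, fp=3, tn=199, fn=10)
# sens ~= 0.9823, spec ~= 0.9851
```

Sensitivity here answers "what fraction of true BCCs did the model flag?", while specificity answers "what fraction of BCC-free specimens did it correctly pass?", the two error modes that matter differently in a screening workflow.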
Affiliation(s)
- Daniel Otero Baguer
- Center for Technical Mathematics (ZeTeM), University of Bremen, Bremen, Germany
- Maximilian Schmidt
- Center for Technical Mathematics (ZeTeM), University of Bremen, Bremen, Germany
- Klaus Georg Griewank
- Dermatopathologie bei Mainz, Nieder-Olm, Germany
- Department of Dermatology, University Hospital Essen, Essen, Germany
- Eva Hadaschik
- MVZ Dermatopathology Duisburg Essen, Essen, Germany
- Department of Dermatology, University Hospital Essen, Essen, Germany
- Sonja Hetzer
- MVZ Dermatopathology Duisburg Essen, Essen, Germany
- Philipp Jansen
- Department of Dermatology and Allergology, University Hospital Bonn, Bonn, Germany
- Peter Maass
- Center for Technical Mathematics (ZeTeM), University of Bremen, Bremen, Germany
18
Duschner N, Baguer DO, Schmidt M, Griewank KG, Hadaschik E, Hetzer S, Wiepjes B, Le'Clerc Arrastia J, Jansen P, Maass P, Schaller J. Einsatz künstlicher Intelligenz mittels Deep Learning in der dermatopathologischen Routinediagnostik des Basalzellkarzinoms: Applying an artificial intelligence deep learning approach to routine dermatopathological diagnosis of basal cell carcinoma. J Dtsch Dermatol Ges 2023; 21:1329-1338. [PMID: 37946648 DOI: 10.1111/ddg.15180_g] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 06/15/2023] [Indexed: 11/12/2023]
Abstract
BACKGROUND Institutes of dermatopathology face increasing challenges due to ever-higher demands on the one hand and dwindling resources on the other. Basal cell carcinomas account for a large share of submitted specimens, with a corresponding workload. At the same time, the digitization of glass slides enables the use of artificial intelligence (AI)-based methods in dermatopathology. To date, these methods have not entered routine diagnostics. The aim of this study was therefore to establish an AI-based model for automated basal cell carcinoma detection. PATIENTS AND METHODS In three dermatopathological centers, basal cell carcinoma cases were digitized during daily routine practice and assessed both conventionally at the microscope and by an AI-based method built on neural networks with a U-Net architecture. RESULTS In routine practice, the model achieved a sensitivity of 98.23% and a specificity of 98.51% (center 1). The model could be deployed seamlessly in the other centers and achieved similarly high accuracy in basal cell carcinoma detection (sensitivities of 97.67% and 98.57%, respectively; specificities of 96.77% and 98.73%, respectively). In addition, automated AI-based basal cell carcinoma subtyping and tumor thickness measurement were established. CONCLUSIONS AI-based methods can detect basal cell carcinomas with high accuracy in routine practice and significantly support dermatopathological work.
Affiliation(s)
- Klaus Georg Griewank
- Dermatopathologie bei Mainz, Nieder-Olm
- Klinik für Dermatologie, Universitätsklinikum Essen
- Eva Hadaschik
- MVZ Dermatopathologie Duisburg Essen GmbH, Essen
- Klinik für Dermatologie, Universitätsklinikum Essen
- Sonja Hetzer
- MVZ Dermatopathologie Duisburg Essen GmbH, Essen
- Philipp Jansen
- Klinik und Poliklinik für Dermatologie und Allergologie, Universitätsklinikum Bonn
- Peter Maass
- Zentrum für Technomathematik (ZeTeM), Universität Bremen
19
Pan S, Secrier M. HistoMIL: A Python package for training multiple instance learning models on histopathology slides. iScience 2023; 26:108073. [PMID: 37860768 PMCID: PMC10583115 DOI: 10.1016/j.isci.2023.108073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 08/21/2023] [Accepted: 09/25/2023] [Indexed: 10/21/2023] Open
Abstract
Hematoxylin and eosin (H&E) stained slides are widely used in disease diagnosis. Remarkable advances in deep learning have made it possible to detect complex molecular patterns in these histopathology slides, suggesting automated approaches could help inform pathologists' decisions. Multiple instance learning (MIL) algorithms have shown promise in this context, outperforming transfer learning (TL) methods for various tasks, but their implementation and usage remain complex. We introduce HistoMIL, a Python package designed to streamline the implementation, training, and inference of MIL-based algorithms for computational pathologists and biomedical researchers. It integrates a self-supervised learning module for feature encoding, and a full pipeline encompassing TL and three MIL algorithms: ABMIL, DSMIL, and TransMIL. The PyTorch Lightning framework enables effortless customization and algorithm implementation. We illustrate HistoMIL's capabilities by building predictive models for 2,487 cancer hallmark genes on breast cancer histology slides, achieving AUROC performances of up to 85%.
Affiliation(s)
- Shi Pan
- Department of Genetics, Evolution and Environment, UCL Genetics Institute, University College London, London WC1E 6BT, UK
- Maria Secrier
- Department of Genetics, Evolution and Environment, UCL Genetics Institute, University College London, London WC1E 6BT, UK
20
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. [PMID: 37835858 PMCID: PMC10572440 DOI: 10.3390/diagnostics13193115] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023] [Indexed: 10/15/2023] Open
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is important for AIP, this paper reviewed AIP-related studies in the PubMed database published from January 2017 to February 2022, and 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons behind the challenges in reproducing the high performance of AIP in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning methods based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to performance reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully enable clinical-grade AIP.
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)
- Department of Biomedical Engineering, School of Medical, Tsinghua University, Beijing 100084, China
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)
- Furong Laboratory, Changsha 410013, China
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China
- Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)

21
Mehrtens HA, Kurz A, Bucher TC, Brinker TJ. Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise. Med Image Anal 2023; 89:102914. [PMID: 37544085 DOI: 10.1016/j.media.2023.102914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 05/17/2023] [Accepted: 07/25/2023] [Indexed: 08/08/2023]
Abstract
In the past years, deep learning has seen increasing usage in histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of whole slide images, with a focus on the task of selective classification, where the model should reject the classification in situations in which it is uncertain. We conduct our experiments at the tile level, under the aspects of domain shift and label noise, as well as at the slide level. In our experiments, we compare Deep Ensembles, Monte-Carlo Dropout, Stochastic Variational Inference, and Test-Time Data Augmentation, as well as ensembles of the latter approaches. We observe that ensembles of methods generally lead to better uncertainty estimates and increased robustness towards domain shifts and label noise, whereas, contrary to results from classical computer-vision benchmarks, no systematic gain from the other methods can be shown. Across methods, rejecting the most uncertain samples reliably leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
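As an illustration of the selective-classification setting this abstract describes, the sketch below ranks predictions by predictive entropy and rejects the most uncertain ones. It is a minimal, hypothetical NumPy example, not the authors' published framework; the function names and toy probabilities are invented for illustration.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities (natural log)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def selective_accuracy(probs, labels, coverage):
    """Keep the `coverage` fraction of samples the model is most certain
    about (lowest predictive entropy) and return accuracy on that subset."""
    n_keep = int(round(coverage * len(labels)))
    order = np.argsort(predictive_entropy(probs))  # most certain first
    kept = order[:n_keep]
    return np.mean(np.argmax(probs[kept], axis=1) == labels[kept])

# Toy example: two confident correct tiles, one near-uniform (uncertain) tile.
probs = np.array([[0.95, 0.05],
                  [0.90, 0.10],
                  [0.45, 0.55]])   # near-uniform row -> highest entropy
labels = np.array([0, 0, 0])      # the uncertain tile is also misclassified

full = selective_accuracy(probs, labels, coverage=1.0)        # 2/3
selective = selective_accuracy(probs, labels, coverage=2 / 3) # rejects the uncertain tile
```

Rejecting the highest-entropy tile removes the one error, so accuracy on the retained tiles rises from 2/3 to 1.0, mirroring the accuracy-vs-coverage trade-off evaluated in the paper.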
Affiliation(s)
- Hendrik A Mehrtens
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Kurz
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tabea-Clara Bucher
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany

22
Salvi M, Manini C, López JI, Fenoglio D, Molinari F. Deep learning approach for accurate prostate cancer identification and stratification using combined immunostaining of cytokeratin, p63, and racemase. Comput Med Imaging Graph 2023; 109:102288. [PMID: 37633031 DOI: 10.1016/j.compmedimag.2023.102288] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 08/12/2023] [Accepted: 08/12/2023] [Indexed: 08/28/2023]
Abstract
BACKGROUND Prostate cancer (PCa) is the most frequently diagnosed cancer in men worldwide, affecting around 1.4 million individuals. Current PCa diagnosis relies on histological analysis of prostate biopsy samples, an activity that is both time-consuming and prone to observer bias. Previous studies have demonstrated that immunostaining of cytokeratin, p63, and racemase can significantly improve the sensitivity and specificity of PCa detection compared to traditional H&E staining. METHODS This study introduces a novel approach that combines diagnosis-specific immunohistochemical (IHC) staining and deep learning techniques to provide reliable stratification of prostate glands. Our approach leverages a customized segmentation network, called K-PPM, that incorporates adaptive kernels and multiscale feature integration to enhance the functional information of IHC. To address the high class imbalance in the dataset, we propose a weighted adaptive patch-extraction and specific-class kernel update. RESULTS Our system achieved noteworthy results, with a mean Dice Score Coefficient of 90.36% and a mean absolute error of 1.64% in specific-class gland quantification on whole slides. These findings demonstrate the potential of our system as a valuable support tool for pathologists, reducing workload and decreasing diagnostic inter-observer variability. CONCLUSIONS Our study presents innovative approaches that have broad applicability to other digital pathology areas beyond PCa diagnosis. As a fully automated system, this model can serve as a framework for improving the histological and IHC diagnosis of other types of cancer.
Affiliation(s)
- Massimo Salvi
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Claudia Manini
- Department of Pathology, San Giovanni Bosco Hospital, 10154 Turin, Italy; Department of Sciences of Public Health and Pediatrics, University of Turin, 10124 Turin, Italy
- Jose I López
- Biomarkers in Cancer Group, Biocruces-Bizkaia Health Research Institute, 48903 Barakaldo, Spain
- Dario Fenoglio
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Filippo Molinari
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy

23
Qin X, Ran T, Chen Y, Zhang Y, Wang D, Zhou C, Zou D. Artificial Intelligence in Endoscopic Ultrasonography-Guided Fine-Needle Aspiration/Biopsy (EUS-FNA/B) for Solid Pancreatic Lesions: Opportunities and Challenges. Diagnostics (Basel) 2023; 13:3054. [PMID: 37835797 PMCID: PMC10572518 DOI: 10.3390/diagnostics13193054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/06/2023] [Accepted: 09/06/2023] [Indexed: 10/15/2023] Open
Abstract
Solid pancreatic lesions (SPLs) encompass a variety of benign and malignant diseases, and accurate diagnosis is crucial for guiding appropriate treatment decisions. Endoscopic ultrasonography-guided fine-needle aspiration/biopsy (EUS-FNA/B) serves as a front-line diagnostic tool for pancreatic mass lesions and is widely used in clinical practice. Artificial intelligence (AI) is a mathematical technique that automates the learning and recognition of data patterns. Its strong self-learning ability and unbiased nature have led to its gradual adoption in the medical field. In this paper, we describe the fundamentals of AI and provide a summary of reports on AI in EUS-FNA/B to help endoscopists understand and realize its potential in improving pathological diagnosis and guiding targeted EUS-FNA/B. However, AI models have limitations and shortcomings that need to be addressed before clinical use. Furthermore, as most AI studies are retrospective, large-scale prospective clinical trials are necessary to evaluate their clinical usefulness accurately. Although AI in EUS-FNA/B is still in its infancy, the constant input of clinical data and the advancements in computer technology are expected to make computer-aided diagnosis and treatment more feasible.
Affiliation(s)
- Chunhua Zhou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China; (X.Q.); (T.R.); (Y.C.); (Y.Z.); (D.W.)
- Duowu Zou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China; (X.Q.); (T.R.); (Y.C.); (Y.Z.); (D.W.)

24
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897 PMCID: PMC10622844 DOI: 10.1016/j.jpi.2023.100335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 07/17/2023] [Accepted: 07/19/2023] [Indexed: 11/07/2023] Open
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features serve diverse purposes in several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
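As a toy contrast to the deep features discussed in this survey, the sketch below computes a small "engineered" feature vector (per-channel summary statistics plus an intensity histogram) for an image patch. It is an illustrative, assumption-laden example in plain NumPy, not code from any of the surveyed works; the function name and patch dimensions are invented.

```python
import numpy as np

def handcrafted_features(patch, bins=8):
    """A tiny 'engineered' feature vector for an RGB patch:
    per-channel mean, standard deviation, and an intensity histogram."""
    feats = []
    for c in range(patch.shape[2]):  # one block of features per color channel
        channel = patch[..., c].astype(float)
        hist, _ = np.histogram(channel, bins=bins, range=(0, 255), density=True)
        feats.extend([channel.mean(), channel.std()])
        feats.extend(hist)
    return np.asarray(feats)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
v = handcrafted_features(patch)
# 3 channels x (2 summary statistics + 8 histogram bins) = 30 features
```

Such fixed-length vectors are what classical classifiers consume, whereas the deep methods the survey covers learn the representation directly from pixels.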
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

25
Shiffman S, Rios Piedra EA, Adedeji AO, Ruff CF, Andrews RN, Katavolos P, Liu E, Forster A, Brumm J, Fuji RN, Sullivan R. Analysis of cellularity in H&E-stained rat bone marrow tissue via deep learning. J Pathol Inform 2023; 14:100333. [PMID: 37743975 PMCID: PMC10514468 DOI: 10.1016/j.jpi.2023.100333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 08/18/2023] [Accepted: 08/19/2023] [Indexed: 09/26/2023] Open
Abstract
Our objective was to develop an automated deep-learning-based method to evaluate cellularity in rat bone marrow hematoxylin and eosin whole slide images for preclinical safety assessment. We trained a shallow CNN for segmenting marrow, two Mask R-CNN models for segmenting megakaryocytes (MKCs) and small hematopoietic cells (SHCs), and a SegNet model for segmenting red blood cells. We incorporated the models into a pipeline that identifies and counts MKCs and SHCs in rat bone marrow. We compared the cell segmentations and counts that our method generated to those that pathologists generated on 10 slides, covering a range of cell-depletion levels, from 10 studies. For SHCs, we also compared the counts our method generated to counts generated by Cellpose and StarDist. The median Dice and object Dice scores for MKCs using our method vs pathologist consensus were comparable to the inter- and intra-pathologist variation, with overlapping first-third quartile ranges. For SHCs, the median scores were close, with first-third quartile ranges partially overlapping intra-pathologist variation. For SHCs, counts from our method were closer to pathologist counts than those from Cellpose and StarDist, with a narrower 95% limits-of-agreement range. The performance of the bone marrow analysis pipeline supports its incorporation into routine use as an aid for hematotoxicity assessment by pathologists. The pipeline could help expedite hematotoxicity assessment in preclinical studies and consequently drug development. The method may enable meta-analysis of rat bone marrow characteristics from future and historical whole slide images and may generate new biological insights from cross-study comparisons.
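The Dice score used to compare segmentations in studies like this one is straightforward to compute; the minimal sketch below (not the authors' pipeline) shows the standard formula 2|A∩B|/(|A|+|B|) on two toy binary masks.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True  # 8 predicted pixels
b = np.zeros((4, 4), dtype=bool); b[:3, :] = True  # 12 reference pixels, 8 overlap
score = dice(a, b)  # 2 * 8 / (8 + 12) = 0.8
```

Object Dice, also reported in the abstract, applies the same per-object formula over matched instances and averages; the pixel-wise version above is the building block.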
Affiliation(s)
- Smadar Shiffman
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Edgar A. Rios Piedra
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Adeyemi O. Adedeji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Catherine F. Ruff
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Rachel N. Andrews
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Paula Katavolos
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Bristol Myers Squibb, New Brunswick, NJ 08901, USA
- Evan Liu
- Genentech Research and Early Development (gRED), Department of Development Sciences Informatics, Genentech Inc, South San Francisco, USA
- Ashley Forster
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- University of Pennsylvania School of Veterinary Medicine, Philadelphia, PA 19104, USA
- Jochen Brumm
- Genentech Research and Early Development (gRED), Department of Nonclinical Biostatistics, Genentech Inc, South San Francisco, USA
- Reina N. Fuji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Ruth Sullivan
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA

26
Pierre K, Gupta M, Raviprasad A, Sadat Razavi SM, Patel A, Peters K, Hochhegger B, Mancuso A, Forghani R. Medical imaging and multimodal artificial intelligence models for streamlining and enhancing cancer care: opportunities and challenges. Expert Rev Anticancer Ther 2023; 23:1265-1279. [PMID: 38032181 DOI: 10.1080/14737140.2023.2286001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 11/16/2023] [Indexed: 12/01/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) has the potential to transform oncologic care. There have been significant developments in AI applications in medical imaging and increasing interest in multimodal models. These are likely to enable improved oncologic care through more precise diagnosis, increasingly in a more personalized and less invasive manner. In this review, we provide an overview of the current state of the field and of the challenges that clinicians, administrative personnel, and policy makers need to be aware of and mitigate for the technology to reach its full potential. AREAS COVERED The article provides a brief targeted overview of AI and a high-level review of the current state and future potential of AI applications in diagnostic radiology and, to a lesser extent, digital pathology, focusing on oncologic applications. This is followed by a discussion of emerging approaches, including multimodal models. The article concludes with a discussion of the technical and regulatory challenges and the infrastructure needed for AI to realize its full potential. EXPERT OPINION There is a large volume of promising research and a steadily increasing number of commercially available tools using AI. For the most advanced and promising precision diagnostic applications of AI to be used clinically, robust and comprehensive quality-monitoring systems and informatics platforms will likely be required.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Manas Gupta
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Abheek Raviprasad
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
- Seyedeh Mehrsa Sadat Razavi
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
- Anjali Patel
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
- Keith Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Division of Medical Physics, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Neurology, Division of Movement Disorders, University of Florida College of Medicine, Gainesville, FL, USA

27
Jansen P, Baguer DO, Duschner N, Arrastia JL, Schmidt M, Landsberg J, Wenzel J, Schadendorf D, Hadaschik E, Maass P, Schaller J, Griewank KG. Deep learning detection of melanoma metastases in lymph nodes. Eur J Cancer 2023; 188:161-170. [PMID: 37257277 DOI: 10.1016/j.ejca.2023.04.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Revised: 04/20/2023] [Accepted: 04/25/2023] [Indexed: 06/02/2023]
Abstract
BACKGROUND In melanoma patients, surgical excision of the first draining lymph node, the sentinel lymph node (SLN), is a routine procedure to evaluate lymphogenic metastases. Metastasis detection by histopathological analysis assesses multiple tissue levels with hematoxylin and eosin and immunohistochemically stained glass slides. Considering the amount of tissue to analyze, the detection of metastasis can be highly time-consuming for pathologists. The application of artificial intelligence in the clinical routine has constantly increased over the past few years. METHODS In this multi-center study, a deep learning method was established on histological tissue sections of sentinel lymph nodes collected from the clinical routine. The algorithm was trained to highlight potential melanoma metastases for further review by pathologists, without relying on supplementary immunohistochemical stainings (e.g. anti-S100, anti-MelanA). RESULTS The established method was able to detect the presence of metastasis on individual tissue cuts with an area under the curve of 0.9630 and 0.9856, respectively, on two test cohorts from different laboratories. The method was able to accurately identify tumour deposits >0.1 mm and, by automatic tumour diameter measurement, classify these into 0.1-1.0 mm and >1.0 mm groups, thus identifying and classifying the metastases currently relevant for assessing prognosis and stratifying treatment. CONCLUSIONS Our results demonstrate that AI-based SLN melanoma metastasis detection has great potential and could become a routinely applied aid for pathologists. Our current study focused on assessing established parameters; however, larger future AI-based studies could identify novel biomarkers, potentially further improving SLN-based prognostic and therapeutic predictions for affected patients.
Affiliation(s)
- Philipp Jansen
- Department of Dermatology, University Hospital Bonn, Bonn 53127, Germany
- Jennifer Landsberg
- Department of Dermatology, University Hospital Bonn, Bonn 53127, Germany
- Jörg Wenzel
- Department of Dermatology, University Hospital Bonn, Bonn 53127, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, Essen 45147, Germany
- Eva Hadaschik
- Department of Dermatology, University Hospital Essen, Essen 45147, Germany
- Jörg Schaller
- Dermatopathologie Duisburg Essen GmbH, Essen 45329, Germany
- Klaus Georg Griewank
- Department of Dermatology, University Hospital Essen, Essen 45147, Germany; Dermatopathologie bei Mainz, Nieder-Olm 55268, Germany

28
Amato D, Calderaro S, Lo Bosco G, Rizzo R, Vella F. Metric Learning in Histopathological Image Classification: Opening the Black Box. SENSORS (BASEL, SWITZERLAND) 2023; 23:6003. [PMID: 37447857 DOI: 10.3390/s23136003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 06/15/2023] [Accepted: 06/22/2023] [Indexed: 07/15/2023]
Abstract
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians, who have to process a large number of images in long and repetitive tasks. This work proposes the adoption of metric learning, which, beyond the task of classifying images, can provide additional information able to support the decision of the classification system. In particular, triplet networks have been employed to create a representation in the embedding space that gathers together images of the same class while tending to separate images with different labels. The obtained representation shows an evident separation of the classes, with the possibility of evaluating the similarity and dissimilarity among input images according to distance criteria. The model has been tested on the BreakHis dataset, a reference and largely used dataset that collects breast cancer images with eight pathology labels and four magnification levels. Our proposed classification model achieves relevant performance at the patient level, with the advantage of providing interpretable information for the obtained results, a specific feature missed by all the recent methodologies proposed for the same purpose.
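The triplet objective behind such networks can be sketched in a few lines. The example below is a generic NumPy illustration of the hinge-based triplet loss, not the authors' implementation; the embeddings and margin are made up for the demonstration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull same-class embeddings together and push
    different-class embeddings at least `margin` further away."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same-class sample
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to other-class sample
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # same class, close:  d_pos = 0.01
n = np.array([2.0, 0.0])  # other class, far:   d_neg = 4.0
loss_good = triplet_loss(a, p, n)  # satisfied triplet -> 0.0
loss_bad = triplet_loss(a, n, p)   # violated triplet -> positive loss
```

Training a triplet network minimizes this loss over many (anchor, positive, negative) image triples, which is what produces the class-separated embedding space described in the abstract.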
Affiliation(s)
- Domenico Amato
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Salvatore Calderaro
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Giosué Lo Bosco
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Riccardo Rizzo
- Institute for High-Performance Computing and Networking, National Research Council of Italy, 90146 Palermo, Italy
- Filippo Vella
- Institute for High-Performance Computing and Networking, National Research Council of Italy, 90146 Palermo, Italy

29
An automatic entropy method to efficiently mask histology whole-slide images. Sci Rep 2023; 13:4321. [PMID: 36922520 PMCID: PMC10017682 DOI: 10.1038/s41598-023-29638-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 02/08/2023] [Indexed: 03/18/2023] Open
Abstract
Tissue segmentation of histology whole-slide images (WSI) remains a critical task in automated digital pathology workflows, both for accurate disease diagnosis and for deep phenotyping for research purposes. This is especially challenging when the tissue structure of biospecimens is relatively porous and heterogeneous, as for atherosclerotic plaques. In this study, we developed a unique approach called 'EntropyMasker', based on image entropy, to tackle the fore- and background segmentation (masking) task in histology WSI. We evaluated our method on 97 high-resolution WSI of human carotid atherosclerotic plaques in the Athero-Express Biobank Study, comprising hematoxylin and eosin and 8 other staining types. Using multiple benchmarking metrics, we compared our method with four widely used segmentation methods: Otsu's method, Adaptive mean, Adaptive Gaussian and slideMask, and observed that our method had the highest sensitivity and Jaccard similarity index. We envision EntropyMasker filling an important gap in WSI preprocessing and machine learning image analysis pipelines and enabling disease phenotyping beyond the field of atherosclerosis.
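A tile-wise entropy mask in the spirit of this approach (though not the published EntropyMasker code) can be sketched as follows: textured tissue has a spread-out intensity histogram and hence high Shannon entropy, while smooth background scores near zero. The tile size, bin count, and threshold below are illustrative assumptions.

```python
import numpy as np

def entropy_mask(gray, tile=16, threshold=3.0, bins=32):
    """Label each tile of a grayscale slide as tissue (True) when its
    Shannon entropy exceeds `threshold` bits."""
    h, w = gray.shape
    mask = np.zeros((h // tile, w // tile), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            patch = gray[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]                       # ignore empty bins (0 * log 0 = 0)
            entropy = -np.sum(p * np.log2(p))  # Shannon entropy in bits
            mask[i, j] = entropy > threshold
    return mask

rng = np.random.default_rng(42)
slide = np.full((64, 64), 240, dtype=np.uint8)           # bright, uniform background
slide[:32, :32] = rng.integers(60, 200, size=(32, 32))   # textured 'tissue' corner
mask = entropy_mask(slide)  # True only over the textured quadrant
```

Because the score depends on local intensity diversity rather than an absolute brightness cutoff, this style of masking degrades less on faint or unevenly stained slides than a global threshold such as Otsu's method.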
30
Reis HC, Turk V. Transfer Learning Approach and Nucleus Segmentation with MedCLNet Colon Cancer Database. J Digit Imaging 2023; 36:306-325. [PMID: 36127531 PMCID: PMC9984669 DOI: 10.1007/s10278-022-00701-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 09/07/2022] [Accepted: 09/08/2022] [Indexed: 11/30/2022] Open
Abstract
Machine learning has recently seen extensive use in the medical field. In the diagnosis of serious diseases such as cancer, deep learning techniques can reduce the workload of experts and produce quick solutions. The nuclei found in histopathology datasets are an essential parameter in disease detection. In this study, nucleus segmentation was performed on the colorectal histology MNIST dataset using the graph theory, PSO, watershed, and random walker algorithms. In addition, we present the 10-class MedCLNet visual dataset, consisting of the NCT-CRC-HE-100K dataset, the LC25000 dataset, and the GlaS dataset, which can be used in transfer learning studies with deep learning techniques. The study proposes a transfer learning technique using the MedCLNet database. Deep neural networks pre-trained with the proposed transfer learning method were used for classification on the colorectal histology MNIST dataset in the experimental process. DenseNet201, DenseNet169, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet101V2, and Xception deep learning algorithms were used in the transfer learning and classification studies. The proposed approach was analyzed before and after transfer learning with different methods (DenseNet169 + SVM, DenseNet169 + GRU). In the performance measurements on the colorectal histology MNIST multi-classification study, the DenseNet169 model obtained 94.29% accuracy when initialized with random weights and 95.00% accuracy after transfer learning was applied. In comparison with the results obtained from empirical studies, the proposed method produced satisfactory outcomes. The application is expected to provide a secondary evaluation for physicians in colon cancer detection and segmentation.
Affiliation(s)
- Hatice Catal Reis
- Department of Geomatics Engineering, Gumushane University, Gumushane, 2900, Turkey
- Veysel Turk
- Department of Computer Engineering, University of Harran, Sanliurfa, Turkey

31
Srikantamurthy MM, Rallabandi VPS, Dudekula DB, Natarajan S, Park J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med Imaging 2023; 23:19. [PMID: 36717788 PMCID: PMC9885590 DOI: 10.1186/s12880-023-00964-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 01/12/2023] [Indexed: 01/31/2023] Open
Abstract
BACKGROUND Grading cancer histopathology slides is time consuming and requires trained pathologists and expert clinicians to inspect whole-slide images manually. An automated classifier of histopathological breast cancer subtypes is therefore useful for clinical diagnosis and therapeutic response. Recent deep learning methods for medical image analysis suggest the utility of automated radiologic imaging classification for relating disease characteristics or diagnosis and patient stratification. METHODS We developed a hybrid model combining a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM RNN) to classify four benign and four malignant breast cancer subtypes. The proposed CNN-LSTM uses transfer learning from ImageNet to classify and predict the four subtypes of each class. The model was evaluated on the BreakHis dataset, comprising 2480 benign and 5429 malignant cancer images acquired at magnifications of 40×, 100×, 200×, and 400×. RESULTS The proposed hybrid CNN-LSTM model was compared with existing CNN models used for breast histopathological image classification, such as VGG-16, ResNet50, and Inception. All models were built with three optimizers, adaptive moment estimation (Adam), root mean square propagation (RMSProp), and stochastic gradient descent (SGD), over varying numbers of epochs. Adam proved the best optimizer, with maximum accuracy and minimum model loss on both the training and validation sets. The proposed hybrid CNN-LSTM model achieved the highest overall accuracy: 99% for binary classification of benign versus malignant cancer and 92.5% for multi-class classification of the benign and malignant subtypes.
CONCLUSION The proposed transfer learning approach outperformed state-of-the-art machine and deep learning models in classifying benign and malignant cancer subtypes. The method may also be feasible for classifying other cancers and diseases.
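The binary and multi-class tasks above differ only in label granularity. A minimal sketch of that label mapping, assuming the eight standard BreakHis subtype names (the abstract names only the counts, four benign and four malignant):

```python
# Eight BreakHis subtypes grouped by the binary benign/malignant label
# (subtype names assumed from the public BreakHis dataset description).
SUBTYPE_TO_CLASS = {
    "adenosis": "benign", "fibroadenoma": "benign",
    "phyllodes_tumor": "benign", "tubular_adenoma": "benign",
    "ductal_carcinoma": "malignant", "lobular_carcinoma": "malignant",
    "mucinous_carcinoma": "malignant", "papillary_carcinoma": "malignant",
}

def binary_label(subtype: str) -> str:
    """Collapse an 8-way subtype prediction into the 2-way benign/malignant task."""
    return SUBTYPE_TO_CLASS[subtype]
```

A model trained on the 8-way labels can thus be scored on the binary task without retraining, which is how the two reported accuracies relate.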
Affiliation(s)
- Dawood Babu Dudekula
- 3BIGS Omicscore Pvt. Ltd., 909 Lavelle Building, Richmond Circle, Bangalore, 560025 India
- Sathishkumar Natarajan
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
- Junhyung Park
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
32
|
Parwani AV, Patel A, Zhou M, Cheville JC, Tizhoosh H, Humphrey P, Reuter VE, True LD. An update on computational pathology tools for genitourinary pathology practice: A review paper from the Genitourinary Pathology Society (GUPS). J Pathol Inform 2023; 14:100177. [PMID: 36654741 PMCID: PMC9841212 DOI: 10.1016/j.jpi.2022.100177] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 12/20/2022] [Accepted: 12/20/2022] [Indexed: 12/31/2022] Open
Abstract
Machine learning has been leveraged for image analysis applications throughout a multitude of subspecialties. This position paper provides a perspective on the evolutionary trajectory of practical deep learning tools for genitourinary pathology through evaluating the most recent iterations of such algorithmic devices. Deep learning tools for genitourinary pathology demonstrate potential to enhance prognostic and predictive capacity for tumor assessment including grading, staging, and subtype identification, yet limitations in data availability, regulation, and standardization have stymied their implementation.
Affiliation(s)
- Anil V. Parwani
- The Ohio State University, Columbus, Ohio, USA
- Corresponding author.
- Ankush Patel
- The Ohio State University, 2441 60th Ave SE, Mercer Island, Washington 98040, USA
- Ming Zhou
- Tufts University, Medford, Massachusetts, USA
33
|
Frank SJ. Accurate diagnostic tissue segmentation and concurrent disease subtyping with small datasets. J Pathol Inform 2022; 14:100174. [PMID: 36687530 PMCID: PMC9852683 DOI: 10.1016/j.jpi.2022.100174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 09/01/2022] [Accepted: 12/15/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose To provide a flexible, end-to-end platform for visually distinguishing diseased from undiseased tissue in a medical image, in particular pathology slides, and classifying diseased regions by subtype. Highly accurate results are obtained using small training datasets and reduced-scale source images that can be easily shared. Approach An ensemble of lightweight convolutional neural networks (CNNs) is trained on different subsets of images derived from a relatively small number of annotated whole-slide histopathology images (WSIs). The WSIs are first reduced in scale in a manner that preserves anatomic features critical to analysis while also facilitating convenient handling and storage. The segmentation and subtyping tasks are performed sequentially on the reduced-scale images using the same basic workflow: generating and sifting tiles from the image, then classifying each tile with an ensemble of appropriately trained CNNs. For segmentation, the CNN predictions are combined using a function that favors a selected similarity metric, and a mask or map for a candidate image is produced from tiles whose combined predictions exceed a decision boundary. For subtyping, the resulting mask is applied to the candidate image, and new tiles are derived from the unoccluded regions. These are classified by the subtyping CNNs to produce an overall subtype prediction. Results and conclusion This approach was applied successfully to two very different datasets of large WSIs, one (PAIP2020) involving multiple subtypes of colorectal cancer and the other (CAMELYON16) single-type breast cancer metastases. Scored using standard similarity metrics, the segmentations outperformed more complex models typifying the state of the art.
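The tile-voting step described in the Approach can be sketched as below. The mean-combining function and the 0.5 decision boundary are illustrative assumptions only; the paper tunes the combining function toward a chosen similarity metric:

```python
def ensemble_mask(tile_probs, boundary=0.5):
    """Combine per-tile probabilities from several CNNs (mean here) and
    keep tiles whose combined prediction exceeds the decision boundary."""
    n_models = len(tile_probs)
    n_tiles = len(tile_probs[0])
    combined = [sum(model[i] for model in tile_probs) / n_models
                for i in range(n_tiles)]
    return [score > boundary for score in combined]

# Three models scoring four tiles; tiles 0 and 3 clear the boundary.
mask = ensemble_mask([[0.9, 0.2, 0.4, 0.8],
                      [0.8, 0.3, 0.6, 0.7],
                      [0.7, 0.1, 0.4, 0.9]])
print(mask)  # [True, False, False, True]
```

The resulting boolean list stands in for the segmentation mask: masked-in tiles feed the subsequent subtyping stage.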
34
|
Zhao LN, Li JQ, Cheng WX, Liu SQ, Gao ZK, Xu X, Ye CH, You HL. Simulation Palynologists for Pollinosis Prevention: A Progressive Learning of Pollen Localization and Classification for Whole Slide Images. BIOLOGY 2022; 11:biology11121841. [PMID: 36552349 PMCID: PMC9775008 DOI: 10.3390/biology11121841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2022] [Revised: 12/14/2022] [Accepted: 12/15/2022] [Indexed: 12/24/2022]
Abstract
Existing AI approaches usually independently leverage detection or classification models to distinguish allergic pollens from Whole Slide Images (WSIs). However, palynologists tend to identify pollen grains in a progressive learning manner instead of the above one-stage straightforward way. They generally focus on two pivotal problems during pollen identification: (1) Localization: where are the pollen grains located? (2) Classification: which categories do these pollen grains belong to? To closely mimic the manual observation process of palynologists, we propose a progressive method integrating pollen localization and classification to achieve allergic pollen identification from WSIs. Specifically, data preprocessing is first used to cut WSIs into specific patches and filter out blank background patches. Subsequently, we present a multi-scale detection model to locate coarse-grained pollen regions (targeting the "pollen localization" problem) and a multi-classifier combination to determine the fine-grained category of allergic pollens (targeting the "pollen classification" problem). Extensive experimental results demonstrate the feasibility and effectiveness of our proposed method.
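The blank-patch filtering step of the preprocessing can be sketched as below; the brightness threshold is an illustrative assumption, based on slide background being near-white:

```python
def filter_blank_patches(patches, blank_threshold=0.9):
    """Drop near-white background patches; keep patches whose mean
    grayscale brightness (0..1) falls below the threshold."""
    kept = []
    for patch in patches:                      # patch: 2-D list of pixels
        pixels = [v for row in patch for v in row]
        if sum(pixels) / len(pixels) < blank_threshold:
            kept.append(patch)
    return kept

background = [[0.95, 0.96], [0.97, 0.98]]      # blank glass, mean 0.965
tissue = [[0.40, 0.55], [0.60, 0.35]]          # stained content, mean 0.475
kept = filter_blank_patches([background, tissue])
```

Only the non-blank patches are passed on to the multi-scale detector, which keeps the downstream models from wasting capacity on empty glass.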
Affiliation(s)
- Lin-Na Zhao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jian-Qiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Wen-Xiu Cheng
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Su-Qin Liu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Zheng-Kai Gao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Xi Xu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Correspondence:
- Cai-Hua Ye
- Beijing Meteorological Service Center, Beijing 100089, China
- Huan-Ling You
- Beijing Meteorological Service Center, Beijing 100089, China
35
|
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and advances in computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze research related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, this study divides the work into three directions based on the types of results reported: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize the related work from recent years. The results demonstrate the significant capability of deep learning when applied to breast cancer pathological images. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of research on breast cancer pathological imaging and offers reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
36
|
Kawazoe Y, Shimamoto K, Yamaguchi R, Nakamura I, Yoneda K, Shinohara E, Shintani-Domoto Y, Ushiku T, Tsukamoto T, Ohe K. Computational Pipeline for Glomerular Segmentation and Association of the Quantified Regions with Prognosis of Kidney Function in IgA Nephropathy. Diagnostics (Basel) 2022; 12:diagnostics12122955. [PMID: 36552963 PMCID: PMC9776670 DOI: 10.3390/diagnostics12122955] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/20/2022] [Accepted: 11/20/2022] [Indexed: 11/29/2022] Open
Abstract
The histopathological findings of the glomeruli from whole slide images (WSIs) of a renal biopsy play an important role in diagnosing and grading kidney disease. This study aimed to develop an automated computational pipeline to detect glomeruli and to segment the histopathological regions inside the glomerulus in a WSI. To assess the significance of this pipeline, we conducted a multivariate regression analysis to determine whether the quantified regions were associated with the prognosis of kidney function in 46 cases of immunoglobulin A nephropathy (IgAN). The developed pipelines showed mean intersection over union (IoU) values of 0.670 and 0.693 for five classes (i.e., background, Bowman's space, glomerular tuft, crescentic, and sclerotic regions) on WSIs from the same facility, and 0.678 and 0.609 on WSIs from an external facility. The multivariate analysis revealed that the predicted sclerotic regions, even those predicted by the external model, had a significant negative impact on the slope of the estimated glomerular filtration rate after biopsy. This is the first study to demonstrate that sclerotic regions quantified by an automated computational pipeline for segmenting the histopathological glomerular components on WSIs are associated with the prognosis of kidney function in patients with IgAN.
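The mean IoU reported above averages per-class IoU over the five segmentation classes. A minimal sketch of that metric over flattened label maps; the convention of scoring an everywhere-absent class as 1.0 is an assumption for the toy example:

```python
def class_iou(pred, truth, cls):
    """Intersection over union for one class over flattened label maps."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 1.0     # class absent in both maps

def mean_iou(pred, truth, classes):
    """Average the per-class IoU, as in multi-class segmentation scoring."""
    return sum(class_iou(pred, truth, c) for c in classes) / len(classes)

# Toy 6-pixel example with classes 0 (background), 1, and 2.
pred  = [0, 1, 1, 2, 2, 0]
truth = [0, 1, 2, 2, 2, 0]
m = mean_iou(pred, truth, classes=[0, 1, 2])   # (1.0 + 0.5 + 2/3) / 3
```

In practice the same computation runs over whole label images rather than short lists, but the per-class intersection/union bookkeeping is identical.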
Affiliation(s)
- Yoshimasa Kawazoe
- Artificial Intelligence in Healthcare, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Correspondence: ; Tel.: +81-3-5800-9077
- Kiminori Shimamoto
- Artificial Intelligence in Healthcare, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Ryohei Yamaguchi
- Ohshima Memorial Kisen Hospital, 3-5-15, Misaki, Chiba 274-0812, Japan
- Issei Nakamura
- NTT DOCOMO, Inc., Sanno Park Tower, 2-11-1, Nagata-cho, Chiyoda-ku, Tokyo 100-6150, Japan
- Kota Yoneda
- Department of Reproductive, Developmental, and Aging Sciences, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Emiko Shinohara
- Artificial Intelligence in Healthcare, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Yukako Shintani-Domoto
- Department of Diagnostic Pathology, Nippon Medical School Hospital, 1-1-5, Sendagi, Bunkyo-ku, Tokyo 113-8602, Japan
- Tetsuo Ushiku
- Department of Pathology, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Tatsuo Tsukamoto
- Department of Nephrology and Dialysis, Tazuke Kofukai Medical Research Institute, Kitano Hospital, 2-4-20, Ohgimachi, Kita-ku, Osaka 530-8480, Japan
- Kazuhiko Ohe
- Department of Biomedical Informatics, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
37
|
Kim I, Kang K, Song Y, Kim TJ. Application of Artificial Intelligence in Pathology: Trends and Challenges. Diagnostics (Basel) 2022; 12:diagnostics12112794. [PMID: 36428854 PMCID: PMC9688959 DOI: 10.3390/diagnostics12112794] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/03/2022] [Accepted: 11/11/2022] [Indexed: 11/16/2022] Open
Abstract
Given the recent success of artificial intelligence (AI) in computer vision applications, many pathologists anticipate that AI will be able to assist them in a variety of digital pathology tasks. Simultaneously, tremendous advancements in deep learning have enabled a synergy with AI, allowing for image-based diagnosis against the background of digital pathology. There are ongoing efforts to develop AI-based tools that save pathologists time and eliminate errors. Here, we describe the elements in the development of computational pathology (CPATH), its applicability to AI development, and the challenges it faces, such as algorithm validation and interpretability, computing systems, reimbursement, ethics, and regulations. Furthermore, we present an overview of novel AI-based approaches that could be integrated into pathology laboratory workflows.
Affiliation(s)
- Inho Kim
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Kyungmin Kang
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Youngjae Song
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
- Correspondence: ; Tel.: +82-2-3779-2157
38
|
TIAToolbox as an end-to-end library for advanced tissue image analytics. COMMUNICATIONS MEDICINE 2022; 2:120. [PMID: 36168445 PMCID: PMC9509319 DOI: 10.1038/s43856-022-00186-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 09/12/2022] [Indexed: 11/12/2022] Open
Abstract
Background Computational pathology has seen rapid growth in recent years, driven by advanced deep-learning algorithms. Due to the sheer size and complexity of multi-gigapixel whole-slide images, to the best of our knowledge, there is no open-source software library providing a generic end-to-end API for pathology image analysis using best practices. Most researchers have designed custom pipelines from the bottom up, restricting the development of advanced algorithms to specialist users. To help overcome this bottleneck, we present TIAToolbox, a Python toolbox designed to make computational pathology accessible to computational, biomedical, and clinical researchers. Methods By creating modular and configurable components, we enable the implementation of computational pathology algorithms in a way that is easy to use, flexible and extensible. We consider common sub-tasks including reading whole slide image data, patch extraction, stain normalization and augmentation, model inference, and visualization. For each of these steps, we provide a user-friendly application programming interface for commonly used methods and models. Results We demonstrate the use of the interface to construct a full computational pathology deep-learning pipeline. We show, with the help of examples, how state-of-the-art deep-learning algorithms can be reimplemented in a streamlined manner using our library with minimal effort. Conclusions We provide a usable and adaptable library with efficient, cutting-edge, and unit-tested tools for data loading, pre-processing, model inference, post-processing, and visualization. This enables a range of users to easily build upon recent deep-learning developments in the computational pathology literature. Computational software is being introduced to pathology, the study of the causes and effects of disease. Recently various computational pathology algorithms have been developed to analyze digital histology images. 
However, the software code written for these algorithms often combines functionality from several software packages with specific setup requirements and code styles, which makes it difficult to re-use the code in other projects. We developed a software library named TIAToolbox to alleviate this problem and hope it will help accelerate the use of computational software in pathology. Pocock, Graham et al. present TIAToolbox, a Python toolbox for computational pathology. The extendable library can be used for data loading, pre-processing, model inference, post-processing, and visualization.
39
|
Liu Y, Bilodeau E, Pollack B, Batmanghelich K. Automated detection of premalignant oral lesions on whole slide images using convolutional neural networks. Oral Oncol 2022; 134:106109. [PMID: 36126604 DOI: 10.1016/j.oraloncology.2022.106109] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/12/2022] [Accepted: 08/29/2022] [Indexed: 11/30/2022]
Abstract
INTRODUCTION Oral epithelial dysplasia (OED) is a precursor lesion to oral squamous cell carcinoma, a disease with a reported overall survival rate of 56 percent across all stages. Accurate detection of OED is critical, as progression to oral cancer can be impeded by complete excision of premalignant lesions. However, previous research has demonstrated that grading of OED, even when performed by highly trained experts, is subject to high rates of reader variability and misdiagnosis. Thus, our study aims to develop a convolutional neural network (CNN) model that can identify regions suspicious for OED in whole-slide pathology images. METHODS During model development, we optimized key training hyperparameters, including the loss function, on 112 pathologist-annotated cases split between the training and validation sets. We then compared OED segmentation and classification metrics between two well-established CNN architectures for medical imaging, DeepLabv3+ and UNet++. To further assess generalizability, we assessed case-level performance on a held-out test set of 44 whole-slide images. RESULTS DeepLabv3+ outperformed UNet++ in overall accuracy, precision, and segmentation metrics in a 4-fold cross-validation study. When applied to the held-out test set, our best-performing DeepLabv3+ model achieved an overall accuracy of 93.3 percent and an F1-score of 90.9 percent. CONCLUSION The present study trained and implemented a CNN-based deep learning model for identification and segmentation of oral epithelial dysplasia (OED) with reasonable success. Computer-assisted detection was shown to be feasible for detecting premalignant/precancerous oral lesions, laying the groundwork for eventual clinical implementation.
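The case-level accuracy and F1-score quoted above combine the usual binary confusion-matrix counts; a minimal sketch (the counts below are illustrative only, not the paper's data):

```python
def accuracy_and_f1(tp, fp, fn, tn):
    """Overall accuracy and F1-score from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

# Hypothetical counts for a 200-case test set.
acc, f1 = accuracy_and_f1(tp=90, fp=10, fn=8, tn=92)
```

F1 balances precision and recall, which is why papers report it alongside raw accuracy when the positive class (here, dysplastic cases) matters most.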
Affiliation(s)
- Yingci Liu
- University of Pittsburgh, Department of Biomedical Informatics, 5607, Baum Boulevard, Pittsburgh, PA 15206, USA; Rutgers School of Dental Medicine, 110, Bergen St, Newark, NJ 07101, USA.
- Elizabeth Bilodeau
- University of Pittsburgh School of Dental Medicine, 3501 Terrace St., Pittsburgh, PA 15206, USA
- Brian Pollack
- University of Pittsburgh, Department of Biomedical Informatics, 5607, Baum Boulevard, Pittsburgh, PA 15206, USA
- Kayhan Batmanghelich
- University of Pittsburgh, Department of Biomedical Informatics, 5607, Baum Boulevard, Pittsburgh, PA 15206, USA
40
|
Rippner DA, Raja PV, Earles JM, Momayyezi M, Buchko A, Duong FV, Forrestel EJ, Parkinson DY, Shackel KA, Neyhart JL, McElrone AJ. A workflow for segmenting soil and plant X-ray computed tomography images with deep learning in Google's Colaboratory. FRONTIERS IN PLANT SCIENCE 2022; 13:893140. [PMID: 36176692 PMCID: PMC9514790 DOI: 10.3389/fpls.2022.893140] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 08/12/2022] [Indexed: 06/16/2023]
Abstract
X-ray micro-computed tomography (X-ray μCT) has enabled the characterization of the properties and processes that take place in plants and soils at the micron scale. Despite the widespread use of this advanced technique, major bottlenecks in both hardware and software limit the speed and accuracy of image processing and data analysis. Recent advances in machine learning, specifically the application of convolutional neural networks to image analysis, have enabled rapid and accurate segmentation of image data. Yet, challenges remain in applying convolutional neural networks to the analysis of environmentally and agriculturally relevant images. Specifically, there is a disconnect between the computer scientists and engineers who build these AI/ML tools and the potential end users in agricultural research, who may be unsure of how to apply these tools in their work. Additionally, the computing resources required for training and applying deep learning models are unique, more common to computer gaming systems or graphics design work than to traditional computational systems. To navigate these challenges, we developed a modular workflow for applying convolutional neural networks to X-ray μCT images, using low-cost resources in Google's Colaboratory web application. Here we present the results of the workflow, illustrating how parameters can be optimized to achieve the best results using example scans from walnut leaves, almond flower buds, and a soil aggregate. We expect that this framework will accelerate the adoption and use of emerging deep learning techniques within the plant and soil sciences.
Affiliation(s)
- Devin A. Rippner
- Horticultural Crops Production and Genetic Improvement Research Unit-United States Department of Agriculture-Agricultural Research Service, Prosser, WA, United States
- Pranav V. Raja
- Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States
- J. Mason Earles
- Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States
- Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
- Mina Momayyezi
- Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
- Alexander Buchko
- Department of Computer Science, California Polytechnic and State University, San Luis Obispo, CA, United States
- Fiona V. Duong
- Department of Integrative Biology, San Francisco State University, San Francisco, CA, United States
- Elizabeth J. Forrestel
- Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
- Dilworth Y. Parkinson
- Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
- Kenneth A. Shackel
- Department of Plant Sciences, University of California, Davis, Davis, CA, United States
- Jeffrey L. Neyhart
- Genetic Improvement for Fruits and Vegetables Laboratory, United States Department of Agriculture-Agricultural Research Service, Chatsworth, NJ, United States
- Andrew J. McElrone
- Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
- Crops Pathology and Genetics Research Unit, United States Department of Agriculture-Agricultural Research Service, Davis, CA, United States
41
|
Yao T, Lu Y, Long J, Jha A, Zhu Z, Asad Z, Yang H, Fogo AB, Huo Y. Glo-In-One: holistic glomerular detection, segmentation, and lesion characterization with large-scale web image mining. J Med Imaging (Bellingham) 2022; 9:052408. [PMID: 35747553 DOI: 10.1117/1.jmi.9.5.052408] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 05/31/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: The quantitative detection, segmentation, and characterization of glomeruli from high-resolution whole slide imaging (WSI) play essential roles in computer-assisted diagnosis and scientific research in digital renal pathology. Historically, such comprehensive quantification requires extensive programming skills to handle heterogeneous and customized computational tools. To bridge the gap of performing glomerular quantification for non-technical users, we develop the Glo-In-One toolkit to achieve holistic glomerular detection, segmentation, and characterization via a single line of command. Additionally, we release a large-scale collection of 30,000 unlabeled glomerular images to further facilitate the algorithmic development of self-supervised deep learning. Approach: The inputs of the Glo-In-One toolkit are WSIs, while the outputs are (1) WSI-level multi-class circle glomerular detection results (which can be directly manipulated with ImageScope), (2) glomerular image patches with segmentation masks, and (3) different lesion types. In the current version, fine-grained global glomerulosclerosis (GGS) characterization is provided, including solidified GGS (associated with hypertension-related injury), disappearing GGS (a further end result of solidified GGS becoming contiguous with fibrotic interstitium), and obsolescent GGS (nonspecific GGS increasing with aging) glomeruli. To boost the performance of the Glo-In-One toolkit, we introduce self-supervised deep learning to glomerular quantification via large-scale web image mining. Results: The GGS fine-grained classification model achieved a decent performance compared with baseline supervised methods while using only 10% of the annotated data. The glomerular detection achieved an average precision of 0.627 with circle representations, while the glomerular segmentation achieved a 0.955 patch-wise Dice similarity coefficient.
Conclusion: We develop and release the open-source Glo-In-One toolkit, software for holistic glomerular detection, segmentation, and lesion characterization. The toolkit is usable by non-technical users via a single line of command. The toolbox and the 30,000 web-mined glomerular images have been made publicly available at https://github.com/hrlblab/Glo-In-One.
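The patch-wise Dice similarity coefficient reported for the segmentation stage can be sketched for flattened binary masks as follows (the convention of scoring two empty masks as 1.0 is an assumption for the edge case):

```python
def dice(pred_mask, true_mask):
    """Dice similarity coefficient for flattened binary (0/1) masks:
    2 * |intersection| / (|prediction| + |ground truth|)."""
    inter = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 2 * inter / total if total else 1.0  # both masks empty: perfect

score = dice([1, 1, 0, 1], [1, 0, 0, 1])  # overlap of 2 over (3 + 2) pixels
```

Dice is closely related to IoU (Dice = 2·IoU / (1 + IoU)) but weights the overlap more heavily, which is why segmentation papers often report one or the other.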
Affiliation(s)
- Tianyuan Yao
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Yuzhe Lu
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Jun Long
- Central South University, Big Data Institute, Changsha, China
- Aadarsh Jha
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Zheyu Zhu
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Zuhayr Asad
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Haichun Yang
- Vanderbilt University Medical Center, Department of Pathology, Microbiology and Immunology, Nashville, Tennessee, United States
- Agnes B Fogo
- Vanderbilt University Medical Center, Department of Pathology, Microbiology and Immunology, Nashville, Tennessee, United States
- Yuankai Huo
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
42
|
Wong ANN, He Z, Leung KL, To CCK, Wong CY, Wong SCC, Yoo JS, Chan CKR, Chan AZ, Lacambra MD, Yeung MHY. Current Developments of Artificial Intelligence in Digital Pathology and Its Future Clinical Applications in Gastrointestinal Cancers. Cancers (Basel) 2022; 14:3780. [PMID: 35954443 PMCID: PMC9367360 DOI: 10.3390/cancers14153780] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 07/27/2022] [Accepted: 08/01/2022] [Indexed: 02/05/2023] Open
Abstract
The implementation of digital pathology (DP) will revolutionize current practice by providing pathologists with additional tools and algorithms to improve workflow. Furthermore, DP will open up opportunities for the development of artificial intelligence (AI)-based tools for more precise and reproducible diagnosis through computational pathology. One of the key features of AI is its capability to generate perceptions and recognize patterns beyond the human senses. Thus, incorporating AI into DP can reveal additional morphological features and information. At the current rate of AI development and DP adoption, interest in computational pathology is expected to rise in tandem. There have already been promising developments related to AI-based solutions in prostate cancer detection; in the gastrointestinal (GI) tract, however, development of more sophisticated algorithms is required to facilitate histological assessment of GI specimens for early and accurate diagnosis. In this review, we aim to provide an overview of current histological practices in anatomical pathology (AP) laboratories with respect to challenges faced in image preprocessing, present the existing AI-based algorithms, discuss their limitations, and present clinical insight into the application of AI in the early detection and diagnosis of GI cancer.
Affiliation(s)
- Alex Ngai Nick Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Zebang He
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Ka Long Leung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Curtis Chun Kit To
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
- Chun Yin Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Sze Chuen Cesar Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Jung Sun Yoo
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
- Cheong Kin Ronald Chan
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
- Angela Zaneta Chan
- Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Maribel D. Lacambra
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
- Martin Ho Yin Yeung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
43
Jansen P, Baguer DO, Duschner N, Le’Clerc Arrastia J, Schmidt M, Wiepjes B, Schadendorf D, Hadaschik E, Maass P, Schaller J, Griewank KG. Evaluation of a Deep Learning Approach to Differentiate Bowen's Disease and Seborrheic Keratosis. Cancers (Basel) 2022; 14:3518. [PMID: 35884578 PMCID: PMC9320483 DOI: 10.3390/cancers14143518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 06/21/2022] [Accepted: 07/18/2022] [Indexed: 12/10/2022] Open
Abstract
Background: Some of the most common cutaneous neoplasms are Bowen’s disease and seborrheic keratosis, a malignant and a benign proliferation, respectively. These entities represent a significant fraction of a dermatopathologist’s workload, and in some cases, histological differentiation may be challenging. The potential of deep learning networks to distinguish these diseases is assessed. Methods: In total, 1935 whole-slide images from three institutions were scanned on two different slide scanners. A U-Net-based segmentation deep learning algorithm was trained on data from one of the centers to differentiate Bowen’s disease, seborrheic keratosis, and normal tissue, learning from annotations performed by dermatopathologists. Optimal thresholds for the class distinction of diagnoses were extracted and assessed on a test set with data from all three institutions. Results: We aimed to diagnose Bowen’s disease with the highest sensitivity. A good performance was observed across all three centers, underlining the model’s robustness. In one of the centers, the distinction between Bowen’s disease and all other diagnoses was achieved with an AUC of 0.9858 and a sensitivity of 0.9511. Seborrheic keratosis was detected with an AUC of 0.9764 and a sensitivity of 0.9394. Nevertheless, distinguishing irritated seborrheic keratosis from Bowen’s disease remained challenging. Conclusions: Bowen’s disease and seborrheic keratosis could be correctly identified by the evaluated deep learning model on test sets from three different centers, two of which were not involved in training, and AUC scores > 0.97 were obtained. The method proved robust to changes in the staining solution and scanner model. We believe this demonstrates that deep learning algorithms can aid in clinical routine; however, the results should be confirmed by qualified histopathologists.
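The paper's exact threshold-extraction procedure is not reproduced in the abstract; as a minimal, hypothetical sketch of what "optimal thresholds with the highest sensitivity" for the malignant class can mean in practice (the function name and toy data are invented for illustration), one can scan candidate cut-offs and, among those meeting a sensitivity floor, keep the one with the best specificity:

```python
def pick_threshold(scores, labels, min_sensitivity=0.95):
    """Choose a decision threshold for the positive (malignant) class.

    scores -- model probabilities for the positive class
    labels -- ground truth, 1 = positive (e.g. Bowen's disease), 0 = other
    Among thresholds reaching at least min_sensitivity, return the
    (threshold, specificity) pair with the highest specificity.
    """
    best = None
    for t in sorted(set(scores)):
        pred = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(pred, labels))
        fn = sum((not p) and y for p, y in zip(pred, labels))
        tn = sum((not p) and (not y) for p, y in zip(pred, labels))
        fp = sum(p and (not y) for p, y in zip(pred, labels))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        if sens >= min_sensitivity and (best is None or spec > best[1]):
            best = (t, spec)
    return best
```

On a toy set such as `pick_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1], 1.0)`, the sketch selects the cut-off 0.8, which keeps every positive while rejecting all negatives.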
Affiliation(s)
- Philipp Jansen
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Department of Dermatology, University Hospital Bonn, 53127 Bonn, Germany
- Daniel Otero Baguer
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
- Nicole Duschner
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
- Jean Le’Clerc Arrastia
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
- Maximilian Schmidt
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
- Bettina Wiepjes
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Eva Hadaschik
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Peter Maass
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
- Jörg Schaller
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
- Klaus Georg Griewank
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Dermatopathologie bei Mainz, 55268 Nieder-Olm, Germany
- Correspondence: ; Tel.: +49-201-723-2326
44
Li R, Sharma V, Thangamani S, Yakimovich A. Open-Source Biomedical Image Analysis Models: A Meta-Analysis and Continuous Survey. FRONTIERS IN BIOINFORMATICS 2022; 2:912809. [PMID: 36304285 PMCID: PMC9580903 DOI: 10.3389/fbinf.2022.912809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 06/13/2022] [Indexed: 12/05/2022] Open
Abstract
Open-source research software has proven indispensable in modern biomedical image analysis. A multitude of open-source platforms drive image analysis pipelines and help disseminate novel analytical approaches and algorithms. Recent advances in machine learning allow for unprecedented improvement in these approaches. However, these novel algorithms come with new requirements in order to remain open source. To understand how these requirements are met, we have collected 50 biomedical image analysis models and performed a meta-analysis of their respective papers, source code, dataset, and trained model parameters. We concluded that while there are many positive trends in openness, only a fraction of all publications makes all necessary elements available to the research community.
Affiliation(s)
- Rui Li
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Vaibhav Sharma
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Subasini Thangamani
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Artur Yakimovich
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Bladder Infection and Immunity Group (BIIG), Department of Renal Medicine, Division of Medicine, University College London, Royal Free Hospital Campus, London, United Kingdom
- Artificial Intelligence for Life Sciences CIC, Dorset, United Kingdom
- Roche Pharma International Informatics, Roche Diagnostics GmbH, Mannheim, Germany
- *Correspondence: Artur Yakimovich,
45
Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
46
Kadiyala P, Elhossiny AM, Carpenter ES. Using Single Cell Transcriptomics to Elucidate the Myeloid Compartment in Pancreatic Cancer. Front Oncol 2022; 12:881871. [PMID: 35664793 PMCID: PMC9161632 DOI: 10.3389/fonc.2022.881871] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 04/08/2022] [Indexed: 11/25/2022] Open
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is a dismal disease with a 5-year survival rate of 10%. A hallmark feature of this disease is its abundant microenvironment which creates a highly immunosuppressive milieu. This is, in large part, mediated by an abundant infiltration of myeloid cells in the PDAC tumor microenvironment. Consequently, therapies that modulate myeloid function may augment the efficacy of standard of care for PDAC. Unfortunately, there is limited understanding about the various subsets of myeloid cells in PDAC, particularly in human studies. This review highlights the application of single-cell RNA sequencing to define the myeloid compartment in human PDAC and elucidate the crosstalk between myeloid cells and the other components of the tumor immune microenvironment.
Affiliation(s)
- Padma Kadiyala
- Department of Immunology, University of Michigan, Ann Arbor, MI, United States
- Ahmed M. Elhossiny
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, United States
- Eileen S. Carpenter
- Department of Internal Medicine, Division of Gastroenterology, Michigan Medicine, University of Michigan, Ann Arbor, MI, United States
- *Correspondence: Eileen S. Carpenter,
47
Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:diagnostics12040837. [PMID: 35453885 PMCID: PMC9028395 DOI: 10.3390/diagnostics12040837] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 03/22/2022] [Accepted: 03/25/2022] [Indexed: 02/04/2023] Open
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step for personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to be incorporated into the routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated to metastasis, and assess the specific components of the tumor microenvironment.
48
Kaur G, Rana PS, Arora V. State-of-the-art techniques using pre-operative brain MRI scans for survival prediction of glioblastoma multiforme patients and future research directions. Clin Transl Imaging 2022; 10:355-389. [PMID: 35261910 PMCID: PMC8891433 DOI: 10.1007/s40336-022-00487-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 02/15/2022] [Indexed: 11/28/2022]
Abstract
Objective Glioblastoma multiforme (GBM) is a grade IV brain tumour with very low life expectancy. Physicians and oncologists urgently require automated techniques in clinics for brain tumour segmentation (BTS) and survival prediction (SP) of GBM patients to perform precise surgery followed by chemotherapy treatment. Methods This study examines recent methodologies that use automated learning and radiomics to automate the process of SP. Automated techniques use pre-operative raw magnetic resonance imaging (MRI) scans and clinical data related to GBM patients. All SP methods submitted for the multimodal brain tumour segmentation (BraTS) challenge are examined to extract the generic workflow for SP. Results The maximum accuracies achieved by the 21 state-of-the-art SP techniques reviewed in this study are 65.5% and 61.7% on the validation and testing subsets of the BraTS dataset, respectively. Comparisons based on segmentation architectures, SP models, training parameters and hardware configurations have been made. Conclusion The limited accuracies reported in the literature led us to review the various automated methodologies and evaluation metrics to identify research gaps and other findings related to the survival prognosis of GBM patients so that these accuracies can be improved in the future. Finally, the paper provides the most promising future research directions to improve the performance of automated SP techniques and increase their clinical relevance.
Affiliation(s)
- Gurinderjeet Kaur
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Prashant Singh Rana
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Vinay Arora
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
49
Wilm F, Benz M, Bruns V, Baghdadlian S, Dexl J, Hartmann D, Kuritcyn P, Weidenfeller M, Wittenberg T, Merkel S, Hartmann A, Eckstein M, Geppert CI. Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification. J Med Imaging (Bellingham) 2022; 9:027501. [PMID: 35300344 PMCID: PMC8920491 DOI: 10.1117/1.jmi.9.2.027501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 02/17/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Automatic outlining of different tissue types in digitized histological specimens provides a basis for follow-up analyses and can potentially guide subsequent medical decisions. The immense size of whole-slide images (WSIs), however, poses a challenge in terms of computation time. In this regard, the analysis of nonoverlapping patches outperforms pixelwise segmentation approaches but still leaves room for optimization. Furthermore, the division into patches, regardless of the biological structures they contain, is a drawback due to the loss of local dependencies. Approach: We propose to subdivide the WSI into coherent regions prior to classification by grouping visually similar adjacent pixels into superpixels. Afterward, only a random subset of patches per superpixel is classified and patch labels are combined into a superpixel label. We propose a metric for identifying superpixels with an uncertain classification and evaluate two medical applications, namely tumor area and invasive margin estimation and tumor composition analysis. Results: The algorithm has been developed on 159 hand-annotated WSIs of colon resections and its performance is compared with an analysis without prior segmentation. The algorithm shows an average speed-up of 41% and an increase in accuracy from 93.8% to 95.7%. By assigning a rejection label to uncertain superpixels, we further increase the accuracy by 0.4%. While tumor area estimation shows high concordance to the annotated area, the analysis of tumor composition highlights limitations of our approach. Conclusion: By combining superpixel segmentation and patch classification, we designed a fast and accurate framework for whole-slide cartography that is AI-model agnostic and provides the basis for various medical endpoints.
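The fusion step described in this abstract (classify a random subset of patches per superpixel, majority-vote the result, and reject uncertain superpixels) can be sketched in a few lines. This is a rough illustration, not the authors' implementation: the function names, the vote-share uncertainty metric, and the rejection threshold of 0.75 are assumptions made for the example.

```python
import random
from collections import Counter

def fuse_patch_labels(patch_labels, reject_threshold=0.75):
    """Combine patch-level class labels into one superpixel label by
    majority vote; return 'reject' when the winning vote share falls
    below the threshold (a stand-in for the paper's uncertainty metric)."""
    counts = Counter(patch_labels)
    label, votes = counts.most_common(1)[0]
    confidence = votes / len(patch_labels)
    return label if confidence >= reject_threshold else "reject"

def cartography(superpixels, n_samples=4, seed=0):
    """For each superpixel (here simply a list of patch labels that a
    classifier would produce), classify only a random subset of patches
    and fuse the results -- the source of the reported speed-up."""
    rng = random.Random(seed)
    labels = []
    for patches in superpixels:
        sample = rng.sample(patches, min(n_samples, len(patches)))
        labels.append(fuse_patch_labels(sample))
    return labels
```

A homogeneous superpixel (e.g. eight "tumor" patches) keeps its label from only four sampled patches, while an evenly mixed one is assigned the rejection label instead of a coin-flip class.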
Affiliation(s)
- Frauke Wilm
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
- Michaela Benz
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Volker Bruns
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Serop Baghdadlian
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Jakob Dexl
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- David Hartmann
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Petr Kuritcyn
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Martin Weidenfeller
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Thomas Wittenberg
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
- Susanne Merkel
- University Hospital Erlangen, Department of Surgery, FAU Erlangen-Nuremberg, Erlangen, Germany
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
- Arndt Hartmann
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
- University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
- Markus Eckstein
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
- University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
- Carol Immanuel Geppert
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
- University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
50
Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP REPORTS : INNOVATION IN HEPATOLOGY 2022; 4:100443. [PMID: 35243281 PMCID: PMC8867112 DOI: 10.1016/j.jhepr.2022.100443] [Citation(s) in RCA: 50] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/26/2021] [Accepted: 01/11/2022] [Indexed: 12/18/2022]
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.