1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357. Received: 10/15/2023; Revised: 12/21/2023; Accepted: 12/23/2023. Open access.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms in clinical practice. This raises a significant question about the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers, addressing the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Asadi-Aghbolaghi M, Darbandsari A, Zhang A, Contreras-Sanz A, Boschman J, Ahmadvand P, Köbel M, Farnell D, Huntsman DG, Churg A, Black PC, Wang G, Gilks CB, Farahani H, Bashashati A. Learning generalizable AI models for multi-center histopathology image classification. NPJ Precis Oncol 2024;8:151. PMID: 39030380; PMCID: PMC11271637; DOI: 10.1038/s41698-024-00652-4. Received: 08/01/2023; Accepted: 07/11/2024. Open access.
Abstract
Investigation of histopathology slides by pathologists is an indispensable component of the routine diagnosis of cancer. Artificial intelligence (AI) has the potential to enhance diagnostic accuracy, improve efficiency, and improve patient outcomes in clinical pathology. However, variations in tissue preparation, staining protocols, and histopathology slide digitization can result in over-fitting of deep learning models trained on data from only one center, underscoring the necessity to generalize deep learning networks for multi-center use. Several techniques, including the use of grayscale images, color normalization, and Adversarial Domain Adaptation (ADA), have been suggested to generalize deep learning algorithms, but there are limitations to their effectiveness and discriminability. Convolutional Neural Networks (CNNs) exhibit higher sensitivity to variations in the amplitude spectrum, whereas humans predominantly rely on phase-related components for object recognition. As such, we propose Adversarial fourIer-based Domain Adaptation (AIDA), which applies the advantages of the Fourier transform in adversarial domain adaptation. We conducted a comprehensive examination of subtype classification tasks in four cancers, incorporating cases from multiple medical centers. Specifically, the datasets included multi-center data for 1113 ovarian cancer cases, 247 pleural cancer cases, 422 bladder cancer cases, and 482 breast cancer cases. Our proposed approach significantly improved performance, achieving superior classification results in the target domain, surpassing the baseline, color augmentation and normalization techniques, and ADA. Furthermore, extensive pathologist reviews suggested that our proposed approach, AIDA, successfully identifies known histotype-specific features. This superior performance highlights AIDA's potential in addressing generalization challenges in deep learning models for multi-center histopathology datasets.
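AIDA's full adversarial training setup is beyond an abstract-level sketch, but the amplitude/phase observation it builds on can be illustrated with a low-frequency amplitude swap in the style of Fourier-based domain adaptation. The function below is a minimal hypothetical sketch (numpy only, names and the `beta` window size are illustrative), not the authors' implementation:

```python
import numpy as np

def swap_low_freq_amplitude(src, tgt, beta=0.1):
    """Replace the low-frequency amplitude spectrum of `src` with that of
    `tgt`, keeping the phase of `src` (which carries most object structure)."""
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_tgt = np.fft.fft2(tgt, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Centre the spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_tgt = np.fft.fftshift(amp_tgt, axes=(0, 1))

    h, w = src.shape[:2]
    b = int(min(h, w) * beta)  # half-size of the swapped low-frequency window
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))

    # Recombine the source phase with the mixed amplitude.
    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```

With `beta=0` no amplitude is exchanged and the input is recovered; larger `beta` transfers more of the target domain's "style" while preserving source structure.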
Affiliation(s)
- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Allen Zhang
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- Jeffrey Boschman
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Pouya Ahmadvand
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Martin Köbel
- Department of Pathology and Laboratory Medicine, University of Calgary, Calgary, AB, Canada
- David Farnell
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- David G Huntsman
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- BC Cancer Research Institute, Vancouver, BC, Canada
- Andrew Churg
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- Peter C Black
- Department of Urologic Sciences, University of British Columbia, Vancouver, BC, Canada
- Gang Wang
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- C Blake Gilks
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
3
Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024;161:431-478. PMID: 39032956; DOI: 10.1016/bs.acr.2024.05.006.
Abstract
The integration of computer vision into pathology through slide digitalization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. The applications of advanced algorithms and deep learning architectures like CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration between pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
- Justin Chang
- Virginia Commonwealth University Health System, Richmond, VA, United States
- Bryce Hatfield
- Virginia Commonwealth University Health System, Richmond, VA, United States
4
Koziarski M, Cyganek B, Niedziela P, Olborski B, Antosz Z, Żydak M, Kwolek B, Wąsowicz P, Bukała A, Swadźba J, Sitkowski P. DiagSet: a dataset for prostate cancer histopathological image classification. Sci Rep 2024;14:6780. PMID: 38514661; PMCID: PMC10958036; DOI: 10.1038/s41598-024-52183-4. Received: 03/16/2023; Accepted: 01/15/2024. Open access.
Abstract
Cancer diseases constitute one of the most significant societal challenges. In this paper, we introduce a novel histopathological dataset for prostate cancer detection. The proposed dataset, consisting of over 2.6 million tissue patches extracted from 430 fully annotated scans, 4675 scans with assigned binary diagnoses, and 46 scans with diagnoses independently provided by a group of histopathologists, can be found at https://github.com/michalkoziarski/DiagSet. Furthermore, we propose a machine learning framework for detection of cancerous tissue regions and prediction of scan-level diagnosis, utilizing thresholding to abstain from the decision in uncertain cases. The proposed approach, composed of ensembles of deep neural networks operating on the histopathological scans at different scales, achieves 94.6% accuracy in patch-level recognition and, in scan-level diagnosis, is compared with 9 human histopathologists, showing high statistical agreement.
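The thresholding-based abstention the authors describe can be sketched minimally as follows. The mean aggregation rule, function name, and threshold values here are illustrative assumptions, not taken from the paper, whose framework aggregates ensembles of deep networks at multiple scales:

```python
import numpy as np

def diagnose_with_abstention(patch_probs, pos_thr=0.75, neg_thr=0.25):
    """Scan-level decision from patch-level cancer probabilities,
    abstaining when the aggregated score falls in the uncertain band."""
    score = float(np.mean(patch_probs))  # simple mean aggregation (illustrative)
    if score >= pos_thr:
        return "cancer"
    if score <= neg_thr:
        return "benign"
    return "abstain"  # defer the uncertain case to a human histopathologist
```

Widening the band between `neg_thr` and `pos_thr` trades coverage for reliability: more scans are deferred, but the automated calls that remain are higher-confidence.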
Affiliation(s)
- Michał Koziarski
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Bogusław Cyganek
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Przemysław Niedziela
- AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Bogusław Olborski
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- Zbigniew Antosz
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- Marcin Żydak
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- Bogdan Kwolek
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Paweł Wąsowicz
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- Andrzej Bukała
- AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Jakub Swadźba
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
- Andrzej Frycz Modrzewski Krakow University, Gustawa Herlinga-Grudzińskiego 1, 30-705, Kraków, Poland
- Piotr Sitkowski
- Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357, Łódź, Poland
5
Zhou H, Wang Y, Zhang B, Zhou C, Vonsky MS, Mitrofanova LB, Zou D, Li Q. Unsupervised domain adaptation for histopathology image segmentation with incomplete labels. Comput Biol Med 2024;171:108226. PMID: 38428096; DOI: 10.1016/j.compbiomed.2024.108226. Received: 10/28/2023; Revised: 02/04/2024; Accepted: 02/25/2024.
Abstract
Stain variations pose a major challenge to deep learning segmentation algorithms in histopathology images. Current unsupervised domain adaptation methods show promise in improving model generalization across diverse staining appearances but demand abundant, accurately labeled source-domain data. This paper assumes a novel scenario: an unsupervised domain adaptation segmentation task with incompletely labeled source data. We propose a Stain-Adaptive Segmentation Network with Incomplete Labels (SASN-IL). Specifically, the algorithm consists of two stages. The first is an incomplete-label correction stage, involving reliable model selection and label correction to rectify false-negative regions in the incomplete labels. The second is the unsupervised domain adaptation stage, achieving segmentation on the target domain. In this stage, we introduce an adaptive stain transformation module, which adjusts the degree of transformation based on segmentation performance. We evaluate our method on a gastric cancer dataset, demonstrating significant improvements, with a 10.01% increase in Dice coefficient over the baseline and competitive performance relative to existing methods.
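The paper's adaptive module tunes the stain-transformation strength from segmentation feedback; a fixed-strength stain perturbation in optical-density space, using the standard Ruifrok-Johnston H&E stain vectors, gives a flavor of the underlying operation. This is an illustrative sketch (function name and jitter parameters are assumptions), not the authors' module:

```python
import numpy as np

# Ruifrok-Johnston stain vectors (rows: hematoxylin, eosin, residual) in RGB OD space.
STAIN_MATRIX = np.array([
    [0.65, 0.70, 0.29],
    [0.07, 0.99, 0.11],
    [0.27, 0.57, 0.78],
])

def jitter_stains(img, alpha=0.05, beta=0.01, rng=None):
    """Perturb per-stain concentrations of an RGB uint8 image to mimic
    lab-to-lab stain variation (fixed-strength, illustrative version)."""
    if rng is None:
        rng = np.random.default_rng()
    od = -np.log10(np.clip(img.astype(float), 1, 255) / 255.0)  # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)      # stain concentrations
    a = rng.uniform(1 - alpha, 1 + alpha, size=3)               # multiplicative jitter
    b = rng.uniform(-beta, beta, size=3)                        # additive jitter
    od_new = ((conc * a + b) @ STAIN_MATRIX).reshape(img.shape)
    return np.clip(255.0 * 10 ** (-od_new), 0, 255).astype(np.uint8)
```

An adaptive variant in the paper's spirit would raise or lower `alpha`/`beta` depending on how the segmentation network performs on the transformed images.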
Affiliation(s)
- Huihui Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
- Benyan Zhang
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Chunhua Zhou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Maxim S Vonsky
- D.I. Mendeleev Institute for Metrology, St. Petersburg 190005, Russia
- Duowu Zou
- Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
6
Ceachi B, Cioplea M, Mustatea P, Gerald Dcruz J, Zurac S, Cauni V, Popp C, Mogodici C, Sticlaru L, Cioroianu A, Busca M, Stefan O, Tudor I, Dumitru C, Vilaia A, Oprisan A, Bastian A, Nichita L. A New Method of Artificial-Intelligence-Based Automatic Identification of Lymphovascular Invasion in Urothelial Carcinomas. Diagnostics (Basel) 2024;14:432. PMID: 38396472; PMCID: PMC10888137; DOI: 10.3390/diagnostics14040432. Received: 01/22/2024; Revised: 02/07/2024; Accepted: 02/14/2024. Open access.
Abstract
The presence of lymphovascular invasion (LVI) in urothelial carcinoma (UC) is a poor prognostic finding. It is difficult to identify on routine hematoxylin-eosin (H&E)-stained slides, but considering the costs and time required for examination, immunohistochemical stains for the endothelium are not the recommended diagnostic protocol. We developed an AI-based automated method for LVI identification on H&E-stained slides. We selected two separate groups of UC patients with transurethral resection specimens. Group A had 105 patients (100 with UC; 5 with cystitis); group B had 55 patients (all with high-grade UC; D2-40 and CD34 immunohistochemical stains performed on each block). All the group A slides and 52 H&E cases from group B showing LVI on immunohistochemistry were scanned using an Aperio GT450 automatic scanner. We performed pixel-per-pixel semantic segmentation of selected areas, and we trained InternImage to identify several classes. The Dice coefficient and Intersection-over-Union scores for LVI detection using our method were 0.77 and 0.52, respectively. The pathologists' H&E-based evaluation in group B showed 89.65% specificity, 42.30% sensitivity, 67.27% accuracy, and an F1 score of 0.55, much lower than the algorithm's Dice coefficient of 0.77. Our model outlines LVI on H&E-stained slides more effectively than human examiners; thus, it proves a valuable tool for pathologists.
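For reference, the two segmentation metrics reported above can be computed from binary masks as follows (a standard definition, not code from the paper):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and Intersection-over-Union for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0  # empty masks count as a perfect match
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

Note that Dice is always at least as large as IoU (Dice = 2·IoU/(1+IoU)), which is consistent with the reported pair of 0.77 and 0.52.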
Affiliation(s)
- Bogdan Ceachi
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Faculty of Automatic Control and Computer Science, National University of Science and Technology Politehnica Bucharest, 313 Splaiul Independenţei, Sector 6, 060042 Bucharest, Romania
- Mirela Cioplea
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Petronel Mustatea
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Department of Surgery, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Julian Gerald Dcruz
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Sabina Zurac
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Victor Cauni
- Department of Urology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Cristiana Popp
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Cristian Mogodici
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Liana Sticlaru
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Alexandra Cioroianu
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Mihai Busca
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Oana Stefan
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Irina Tudor
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Carmen Dumitru
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Alexandra Vilaia
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Alexandra Oprisan
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Department of Neurology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Alexandra Bastian
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
- Luciana Nichita
- Department of Pathology, Colentina University Hospital, 21 Stefan Cel Mare Str., Sector 2, 020125 Bucharest, Romania
- Zaya Artificial Intelligence, 9A Stefan Cel Mare Str., Voluntari, 077190 Ilfov, Romania
- Department of Pathology, University of Medicine and Pharmacy Carol Davila, 37 Dionisie Lupu Str., Sector 1, 020021 Bucharest, Romania
7
Sun X, Li W, Fu B, Peng Y, He J, Wang L, Yang T, Meng X, Li J, Wang J, Huang P, Wang R. TGMIL: A hybrid multi-instance learning model based on the Transformer and the Graph Attention Network for whole-slide images classification of renal cell carcinoma. Comput Methods Programs Biomed 2023;242:107789. PMID: 37722310; DOI: 10.1016/j.cmpb.2023.107789. Received: 03/22/2023; Revised: 08/30/2023; Accepted: 09/01/2023.
Abstract
BACKGROUND AND OBJECTIVES: The pathological diagnosis of renal cell carcinoma is crucial for treatment. Currently, multi-instance learning is commonly used for whole-slide image classification of renal cell carcinoma, mainly under the assumption of independent identical distribution. This is inconsistent with the need to consider correlations between instances during diagnosis. Furthermore, the high resource consumption of pathology images remains an urgent problem. We therefore propose a new multi-instance learning method to address these issues.
METHODS: We propose a hybrid multi-instance learning model based on the Transformer and the Graph Attention Network, called TGMIL, to classify whole-slide images of renal cell carcinoma without pixel-level annotation or region-of-interest extraction. Our approach has three steps. First, we designed a feature pyramid from multiple low magnifications of the whole-slide image, named MMFP. It lets the model incorporate richer information and reduces memory consumption and training time compared with using the highest magnification alone. Second, TGMIL combines the capabilities of the Transformer and the Graph Attention Network, addressing the loss of instance-level contextual and spatial information. Within the Graph Attention Network stream, a simple and efficient approach using max pooling and mean pooling yields the graph adjacency matrix without extra memory consumption. Finally, the outputs of the two streams of TGMIL are aggregated to classify renal cell carcinoma.
RESULTS: On the TCGA-RCC validation set, a public dataset for renal cell carcinoma, the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy of TGMIL were 0.98 ± 0.0015 and 0.9191 ± 0.0062, respectively. It also performed well on a private validation set of renal cell carcinoma pathology images, attaining an AUC of 0.9386 ± 0.0162 and an accuracy of 0.9197 ± 0.0124. Furthermore, on the public breast cancer whole-slide image test dataset CAMELYON16, our model showed good classification performance with an accuracy of 0.8792.
CONCLUSIONS: TGMIL models the diagnostic process of pathologists and shows good classification performance on multiple datasets. Concurrently, the MMFP module efficiently reduces resource requirements, offering a novel angle for exploring computational pathology images.
Affiliation(s)
- Xinhuan Sun
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China; Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Wuchao Li
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Bangkang Fu
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Yunsong Peng
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Junjie He
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China; Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Lihui Wang
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Tongyin Yang
- Department of Pathology, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Xue Meng
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Jin Li
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Jinjing Wang
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Ping Huang
- Department of Pathology, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Rongpin Wang
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
| |
8
Slabaugh G, Beltran L, Rizvi H, Deloukas P, Marouli E. Applications of machine and deep learning to thyroid cytology and histopathology: a review. Front Oncol 2023; 13:958310. [PMID: 38023130 PMCID: PMC10661921 DOI: 10.3389/fonc.2023.958310]
Abstract
This review synthesises past research into how machine and deep learning can improve the cyto- and histopathology processing pipelines for thyroid cancer diagnosis. The current gold-standard preoperative technique of fine-needle aspiration cytology has high interobserver variability, often returns indeterminate samples and cannot reliably identify some pathologies; histopathology analysis addresses these issues to an extent, but it requires surgical resection of the suspicious lesions so cannot influence preoperative decisions. Motivated by these issues, as well as by the chronic shortage of trained pathologists, much research has been conducted into how artificial intelligence could improve current pipelines and reduce the pressure on clinicians. Many past studies have indicated the significant potential of automated image analysis in classifying thyroid lesions, particularly for those of papillary thyroid carcinoma, but these have generally been retrospective, so questions remain about both the practical efficacy of these automated tools and the realities of integrating them into clinical workflows. Furthermore, the nature of thyroid lesion classification is significantly more nuanced in practice than many current studies have addressed, and this, along with the heterogeneous nature of processing pipelines in different laboratories, means that no solution has proven itself robust enough for clinical adoption. There are, therefore, multiple avenues for future research: examine the practical implementation of these algorithms as pathologist decision-support systems; improve interpretability, which is necessary for developing trust with clinicians and regulators; and investigate multiclassification on diverse multicentre datasets, aiming for methods that demonstrate high performance in a process- and equipment-agnostic manner.
Affiliation(s)
- Greg Slabaugh
- Digital Environment Research Institute, Queen Mary University of London, London, United Kingdom
- Luis Beltran
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
- Hasan Rizvi
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
- Panos Deloukas
- William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Eirini Marouli
- Digital Environment Research Institute, Queen Mary University of London, London, United Kingdom
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
- William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
9
Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023; 89:102915. [PMID: 37633177 DOI: 10.1016/j.media.2023.102915]
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty of obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSIs). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention. Research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of the WSI, where the nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class predictions for an input WSI. We evaluate the performance of WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while delivering better or comparable classification with respect to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
10
Ahn JS, Shin S, Yang SA, Park EK, Kim KH, Cho SI, Ock CY, Kim S. Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. J Breast Cancer 2023; 26:405-435. [PMID: 37926067 PMCID: PMC10625863 DOI: 10.4048/jbc.2023.26.e45]
Abstract
Breast cancer is a significant cause of cancer-related mortality in women worldwide. Early and precise diagnosis is crucial, as it can markedly enhance clinical outcomes. The rise of artificial intelligence (AI) has ushered in a new era, notably in image analysis, paving the way for major advancements in breast cancer diagnosis and individualized treatment regimens. In the diagnostic workflow for patients with breast cancer, the role of AI encompasses screening, diagnosis, staging, biomarker evaluation, prognostication, and therapeutic response prediction. Although its potential is immense, its complete integration into clinical practice is challenging. In particular, these challenges include the need for extensive clinical validation, model generalizability, navigating the "black-box" conundrum, and pragmatic considerations of embedding AI into everyday clinical environments. In this review, we comprehensively explore the diverse applications of AI in breast cancer care, underlining its transformative promise and existing impediments. In radiology, we specifically address AI in mammography, tomosynthesis, risk prediction models, and supplementary imaging methods, including magnetic resonance imaging and ultrasound. In pathology, our focus is on AI applications for pathologic diagnosis, evaluation of biomarkers, and predictions related to genetic alterations, treatment response, and prognosis in the context of breast cancer diagnosis and treatment. Our discussion underscores the transformative potential of AI in breast cancer management and emphasizes the importance of focused research to realize the full spectrum of benefits of AI in patient care.
Affiliation(s)
- Seokhwi Kim
- Department of Pathology, Ajou University School of Medicine, Suwon, Korea
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea
11
Xiang H, Shen J, Yan Q, Xu M, Shi X, Zhu X. Multi-scale representation attention based deep multiple instance learning for gigapixel whole slide image analysis. Med Image Anal 2023; 89:102890. [PMID: 37467642 DOI: 10.1016/j.media.2023.102890]
Abstract
Recently, convolutional neural networks (CNNs) that directly use whole slide images (WSIs) for tumor diagnosis and analysis have attracted considerable attention, because they only utilize the slide-level label for model training without any additional annotations. However, directly handling gigapixel WSIs remains a challenging task, due to the billions of pixels and the variations within each WSI. To overcome this problem, in this paper we propose a novel end-to-end interpretable deep MIL framework for WSI analysis that uses a two-branch deep neural network and a multi-scale representation attention mechanism to directly extract features from all patches of each WSI. Specifically, we first divide each WSI into bag-, patch- and cell-level images, and then assign the slide-level label to its corresponding bag-level images, so that WSI classification becomes a MIL problem. Additionally, we design a novel multi-scale representation attention mechanism and embed it into a two-branch deep network to simultaneously mine the bag with a correct label, the significant patches, and their cell-level information. Extensive experiments demonstrate the superior performance of the proposed framework over recent state-of-the-art methods in terms of classification accuracy and model interpretability. All source codes are released at: https://github.com/xhangchen/MRAN/.
Affiliation(s)
- Hangchen Xiang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Junyi Shen
- Division of Liver Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu, 610044, China
- Qingguo Yan
- Department of Pathology, Key Laboratory of Resource Biology and Biotechnology in Western China, Ministry of Education, School of Medicine, Northwest University, 229 Taibai North Road, Xi'an 710069, China
- Meilian Xu
- School of Electronic Information and Artificial Intelligence, Leshan Normal University, Leshan, 614000, China
- Xiaoshuang Shi
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xiaofeng Zhu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
12
Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325 DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and with the advancement of digital imaging technologies, it has spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. In the early stages, the technique involved analyzing histopathology images by extracting mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI) technologies, traditional machine learning methods were applied in this field. Although the performance of the models improved, there were issues such as poor model generalization and tedious manual feature extraction. Subsequently, the introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Due to the special structure of graphs, they are highly suitable for feature extraction in tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology. Furthermore, we discuss the core concepts in this field and highlight the current challenges and future research directions.
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
13
Glänzer L, Masalkhi HE, Roeth AA, Schmitz-Rode T, Slabu I. Vessel Delineation Using U-Net: A Sparse Labeled Deep Learning Approach for Semantic Segmentation of Histological Images. Cancers (Basel) 2023; 15:3773. [PMID: 37568589 PMCID: PMC10417575 DOI: 10.3390/cancers15153773]
Abstract
Semantic segmentation is an important image analysis method enabling the identification of tissue structures. Histological image segmentation is particularly challenging, as the images contain rich structural information while only limited training data are available. Additionally, labeling these structures to generate training data is time-consuming. Here, we demonstrate the feasibility of semantic segmentation using U-Net with a novel sparse labeling technique. The basic U-Net architecture was extended by attention gates, residual and recurrent links, and dropout regularization. To overcome the high class imbalance, which is intrinsic to histological data, under- and oversampling and data augmentation were used. In an ablation study, various architectures were evaluated, and the best performing model was identified. This model contains attention gates, residual links, and a dropout regularization of 0.125. The segmented images show accurate delineations of the vascular structures (with a precision of 0.9088 and an AUC-ROC score of 0.9717), and the segmentation algorithm is robust to images containing staining variations and damaged tissue. These results demonstrate the feasibility of sparse labeling in combination with the modified U-Net architecture.
Affiliation(s)
- Lukas Glänzer
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Husam E. Masalkhi
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Anjali A. Roeth
- Department of Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstrasse 30, 52074 Aachen, Germany
- Department of Surgery, Maastricht University, P. Debyelaan 25, 6229 Maastricht, The Netherlands
- Thomas Schmitz-Rode
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Ioana Slabu
- Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
14
Fogelberg K, Chamarthi S, Maron RC, Niebling J, Brinker TJ. Domain shifts in dermoscopic skin cancer datasets: Evaluation of essential limitations for clinical translation. N Biotechnol 2023:S1871-6784(23)00021-3. [PMID: 37146681 DOI: 10.1016/j.nbt.2023.04.006]
Abstract
The limited ability of convolutional neural networks (CNNs) to generalize to images from previously unseen domains is a major limitation, in particular for safety-critical clinical tasks such as dermoscopic skin cancer classification. In order to translate CNN-based applications into the clinic, it is essential that they are able to adapt to domain shifts. Such new conditions can arise through the use of different image acquisition systems or varying lighting conditions. In dermoscopy, shifts can also occur as a change in patient age or the occurrence of rare lesion localizations (e.g. palms). These are not prominently represented in most training datasets and can therefore lead to a decrease in performance. In order to verify the generalizability of classification models in real-world clinical settings, it is crucial to have access to data which mimics such domain shifts. To our knowledge, no dermoscopic image dataset exists where such domain shifts are properly described and quantified. We therefore grouped publicly available images from the ISIC archive based on their metadata (e.g. acquisition location, lesion localization, patient age) to generate meaningful domains. To verify that these domains are in fact distinct, we used multiple quantification measures to estimate the presence and intensity of domain shifts. Additionally, we analyzed the performance on these domains with and without an unsupervised domain adaptation technique. We observed that domain shifts in fact exist in most of our grouped domains. Based on our results, we believe these datasets to be helpful for testing the generalization capabilities of dermoscopic skin cancer classifiers.
Affiliation(s)
- Katharina Fogelberg
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sireesha Chamarthi
- Data Analysis and Intelligence, German Aerospace Center (DLR - Institute of Data Science), Jena, Germany
- Roman C Maron
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Julia Niebling
- Data Analysis and Intelligence, German Aerospace Center (DLR - Institute of Data Science), Jena, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
15
Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. [PMID: 36857945 DOI: 10.1016/j.media.2023.102768]
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
Affiliation(s)
- Mihael Cudic
- National Institutes of Health Oxford-Cambridge Scholars Program, USA; National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA; Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond
- National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
16
Artificial intelligence in breast pathology - dawn of a new era. NPJ Breast Cancer 2023; 9:5. [PMID: 36720886 PMCID: PMC9889344 DOI: 10.1038/s41523-023-00507-4]
17
Marini N, Marchesin S, Otálora S, Wodzinski M, Caputo A, van Rijthoven M, Aswolinskiy W, Bokhorst JM, Podareanu D, Petters E, Boytcheva S, Buttafuoco G, Vatrano S, Fraggetta F, van der Laak J, Agosti M, Ciompi F, Silvello G, Muller H, Atzori M. Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations. NPJ Digit Med 2022; 5:102. [PMID: 35869179 PMCID: PMC9307641 DOI: 10.1038/s41746-022-00635-4]
Abstract
The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for large datasets required to train robust computer-aided diagnosis methods that can target the high variability of clinical cases and data produced. This work proposes and evaluates an approach to eliminate the need for manual annotations to train computer-aided diagnosis tools in digital pathology. The approach includes two components: one to automatically extract semantically meaningful concepts from diagnostic reports, and one to use them as weak labels to train convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3,769 clinical images and reports, provided by two hospitals, and tested on over 11,000 images from private and publicly available datasets. The CNN, trained with automatically generated labels, is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image level) based only on existing clinical data, without the need for manual annotations.
18
Basso MN, Barua M, Meyer J, John R, Khademi A. Machine learning in renal pathology. Front Nephrol 2022; 2:1007002. [PMID: 37675000 PMCID: PMC10479587 DOI: 10.3389/fneph.2022.1007002]
Abstract
Introduction: When assessing kidney biopsies, pathologists use light microscopy, immunofluorescence, and electron microscopy to describe and diagnose glomerular lesions and diseases. These methods can be laborious, costly, fraught with inter-observer variability, and subject to delays in turn-around time. Computational approaches can therefore be designed as screening and/or diagnostic tools, potentially saving pathologist time and healthcare resources, while also being able to identify novel biomarkers, including subvisual features. Methods: Here, we implement our recently published biomarker feature extraction (BFE) model along with 3 pre-trained deep learning models (VGG16, VGG19, and InceptionV3) to diagnose 3 glomerular diseases using PAS-stained digital pathology images alone. The BFE model extracts a panel of 233 explainable features related to underlying pathology, which are subsequently narrowed down to 10 morphological and microstructural texture features for classification with a linear discriminant analysis machine learning classifier. 45 patient renal biopsies (371 glomeruli) from minimal change disease (MCD), membranous nephropathy (MN), and thin-basement membrane nephropathy (TBMN) were split into training/validation and held-out sets. For the 3 deep learning models, data augmentation and Grad-CAM were used for better performance and interpretability. Results: The BFE model showed a glomerular validation accuracy of 67.6% and a testing accuracy of 76.8%. All deep learning approaches had higher validation accuracies (the highest for VGG16 at 78.5%) but lower testing accuracies. The highest testing accuracy at the glomerular level was achieved by VGG16 at 71.9%, while at the patient level it was InceptionV3 at 73.3%. Discussion: The results highlight the potential of both traditional machine learning and deep learning-based approaches for kidney biopsy evaluation.
Affiliation(s)
- Matthew Nicholas Basso
- Image Analysis in Medicine Lab (IAMLAB), Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Moumita Barua
- Division of Nephrology, University Health Network, Toronto, ON, Canada
- Toronto General Hospital Research Institute, Toronto General Hospital, Toronto, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Julien Meyer
- School of Health Services Management, Ryerson University, Toronto, ON, Canada
- Rohan John
- Department of Pathology, University Health Network, Toronto, ON, Canada
- April Khademi
- Image Analysis in Medicine Lab (IAMLAB), Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Keenan Research Center for Biomedical Science, St. Michael’s Hospital, Unity Health Network, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science, and Technology (iBEST), a partnership between St. Michael’s Hospital and Ryerson University, Toronto, ON, Canada
19
Gerussi A, Scaravaglio M, Cristoferi L, Verda D, Milani C, De Bernardi E, Ippolito D, Asselta R, Invernizzi P, Kather JN, Carbone M. Artificial intelligence for precision medicine in autoimmune liver disease. Front Immunol 2022; 13:966329. [PMID: 36439097 PMCID: PMC9691668 DOI: 10.3389/fimmu.2022.966329]
Abstract
Autoimmune liver diseases (AiLDs) are rare autoimmune conditions of the liver and the biliary tree with unknown etiology and limited treatment options. AiLDs are inherently characterized by a high degree of complexity, which poses great challenges in understanding their etiopathogenesis, developing novel biomarkers and risk-stratification tools, and, eventually, generating new drugs. Artificial intelligence (AI) is considered one of the best candidates to support researchers and clinicians in making sense of biological complexity. In this review, we offer a primer on AI and machine learning for clinicians, and discuss recent available literature on its applications in medicine and more specifically how it can help to tackle major unmet needs in AiLDs.
Affiliation(s)
- Alessio Gerussi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Miki Scaravaglio
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Laura Cristoferi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Chiara Milani
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Elisabetta De Bernardi
- Department of Medicine and Surgery and Tecnomed Foundation, University of Milano - Bicocca, Monza, Italy
- Rosanna Asselta
- Humanitas Clinical and Research Center, Rozzano, Milan, Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
- Pietro Invernizzi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Marco Carbone
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
Collapse
|
20
Yu B, Chen H, Zhang Y, Cong L, Pang S, Zhou H, Wang Z, Cong X. Data and knowledge co-driving for cancer subtype classification on multi-scale histopathological slides. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110168]
21
Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. Nat Cancer 2022; 3:1026-1038. [PMID: 36138135 DOI: 10.1038/s43018-022-00436-4]
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
22
Wong ANN, He Z, Leung KL, To CCK, Wong CY, Wong SCC, Yoo JS, Chan CKR, Chan AZ, Lacambra MD, Yeung MHY. Current Developments of Artificial Intelligence in Digital Pathology and Its Future Clinical Applications in Gastrointestinal Cancers. Cancers (Basel) 2022; 14:3780. [PMID: 35954443 PMCID: PMC9367360 DOI: 10.3390/cancers14153780]
Abstract
The implementation of digital pathology (DP) will revolutionize current practice by providing pathologists with additional tools and algorithms to improve workflow. Furthermore, DP will open up opportunities for the development of AI-based tools for more precise and reproducible diagnosis through computational pathology. One of the key features of AI is its capability to generate perceptions and recognize patterns beyond the human senses. Thus, the incorporation of AI into DP can reveal additional morphological features and information. At the current rate of AI development and DP adoption, interest in computational pathology is expected to rise in tandem. There have already been promising developments related to AI-based solutions in prostate cancer detection; however, in the gastrointestinal (GI) tract, development of more sophisticated algorithms is required to facilitate histological assessment of GI specimens for early and accurate diagnosis. In this review, we aim to provide an overview of current histological practices in anatomical pathology (AP) laboratories with respect to challenges faced in image preprocessing, present the existing AI-based algorithms, discuss their limitations, and offer clinical insight into the application of AI in the early detection and diagnosis of GI cancer.
Affiliation(s)
- Alex Ngai Nick Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Zebang He
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Ka Long Leung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Curtis Chun Kit To
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Chun Yin Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Sze Chuen Cesar Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Jung Sun Yoo
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Cheong Kin Ronald Chan
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Angela Zaneta Chan
- Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Maribel D. Lacambra
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Martin Ho Yin Yeung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
23
Wang Z, Zhu X, Li A, Wang Y, Meng G, Wang M. Global and local attentional feature alignment for domain adaptive nuclei detection in histopathology images. Artif Intell Med 2022; 132:102341. [DOI: 10.1016/j.artmed.2022.102341]
24
Li X, Cen M, Xu J, Zhang H, Xu XS. Improving feature extraction from histopathological images through a fine-tuning ImageNet model. J Pathol Inform 2022; 13:100115. [PMID: 36268072 PMCID: PMC9577036 DOI: 10.1016/j.jpi.2022.100115]
Abstract
Background Due to the lack of annotated pathological images, transfer learning has been the predominant approach in the field of digital pathology. Pre-trained neural networks based on the ImageNet database are often used to extract "off-the-shelf" features, achieving great success in predicting tissue types, molecular features, and clinical outcomes. We hypothesize that fine-tuning the pre-trained models using histopathological images could further improve feature extraction and downstream prediction performance. Methods We used 100,000 annotated H&E image patches for colorectal cancer (CRC) to fine-tune a pre-trained Xception model via a 2-step approach. The features extracted from the fine-tuned Xception (FTX-2048) model and the ImageNet-pretrained (IMGNET-2048) model were compared through: (1) tissue classification for H&E images from CRC, the same image type used for fine-tuning; (2) prediction of immune-related gene expression; and (3) prediction of gene mutations for lung adenocarcinoma (LUAD). Five-fold cross-validation was used for model performance evaluation. Each experiment was repeated 50 times. Findings The features extracted from the fine-tuned FTX-2048 model exhibited significantly higher accuracy (98.4%) for predicting tissue types of CRC compared to the "off-the-shelf" features directly from Xception based on the ImageNet database (96.4%) (P = 2.2 × 10⁻⁶). In particular, FTX-2048 markedly improved the accuracy for stroma from 87% to 94%. Similarly, features from FTX-2048 boosted the prediction of transcriptomic expression of immune-related genes in LUAD. For the genes that had significant relationships with image features (P < 0.05, n = 171), the features from the fine-tuned model improved the prediction for the majority of the genes (139; 81%). In addition, features from FTX-2048 improved prediction of mutation for 5 of the 9 most frequently mutated genes (STK11, TP53, LRP1B, NF1, and FAT1) in LUAD.
Conclusions We provide proof of concept that fine-tuning ImageNet-pretrained neural networks with histopathology images can produce higher-quality features and better prediction performance, not only for same-cancer tissue classification, where similar images from the same cancer are used for fine-tuning, but also for cross-cancer prediction of gene expression and mutation at the patient level.
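The evaluation protocol above (five-fold cross-validation used to compare two extracted feature sets) can be sketched in plain Python. This is an illustrative stand-in, not the authors' code: the toy 2-D features and the nearest-centroid classifier are hypothetical placeholders for the 2048-dimensional FTX-2048/IMGNET-2048 feature vectors and the actual downstream models.

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_centroid_accuracy(features, labels, k=5):
    """Score a feature set with k-fold CV using a nearest-centroid classifier."""
    folds = k_fold_indices(len(features), k)
    correct = 0
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        # Per-class centroids computed from the training folds only.
        centroids = {}
        for c in set(labels[i] for i in train):
            members = [features[i] for i in train if labels[i] == c]
            centroids[c] = [sum(v) / len(members) for v in zip(*members)]
        for i in test:
            # Predict the class whose centroid is closest in squared distance.
            pred = min(centroids, key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(features[i], centroids[c])))
            correct += pred == labels[i]
    return correct / len(features)

# Toy 2-D "features" for two tissue classes; real inputs would be the
# 2048-d vectors extracted by the fine-tuned vs. off-the-shelf networks.
feats = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (0.8, 0.9),
         (0.85, 0.75), (0.05, 0.1), (0.95, 0.9), (0.2, 0.3), (0.8, 0.7)]
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
acc = nearest_centroid_accuracy(feats, labels)
```

Running the same protocol on two competing feature sets and comparing the resulting accuracies mirrors the FTX-2048 vs. IMGNET-2048 comparison in the abstract.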
Affiliation(s)
- Xingyu Li
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Min Cen
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jinfeng Xu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong
- Hong Zhang
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xu Steven Xu
- Clinical Pharmacology and Quantitative Science, Genmab Inc., Princeton, New Jersey, USA
25
Classification of histopathological whole slide images based on multiple weighted semi-supervised domain adaptation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103400]
26
Yamashita R, Long J, Banda S, Shen J, Rubin DL. Learning Domain-Agnostic Visual Representation for Computational Pathology Using Medically-Irrelevant Style Transfer Augmentation. IEEE Trans Med Imaging 2021; 40:3945-3954. [PMID: 34339370 DOI: 10.1109/tmi.2021.3101985]
Abstract
Suboptimal generalization of machine learning models on unseen data is a key challenge that hampers the clinical applicability of such models to medical imaging. Although various methods such as domain adaptation and domain generalization have evolved to combat this challenge, learning robust and generalizable representations is core to medical image understanding and continues to be a problem. Here, we propose STRAP (Style TRansfer Augmentation for histoPathology), a form of data augmentation based on random style transfer from non-medical style sources such as artistic paintings, for learning domain-agnostic visual representations in computational pathology. Style transfer replaces the low-level texture content of an image with the uninformative style of a randomly selected style source image, while preserving the original high-level semantic content. This improves robustness to domain shift and can be used as a simple yet powerful tool for learning domain-agnostic representations. We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts, on two classification tasks in computational pathology. Our code is available at https://github.com/rikiyay/style-transfer-for-digital-pathology.
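The core operation of style-transfer augmentation, replacing low-level appearance statistics while preserving content, can be illustrated with a channel-wise mean/standard-deviation swap in the spirit of adaptive instance normalization. This is a hedged toy sketch, not the STRAP implementation: the actual method applies a full neural style-transfer network to image patches, and the `swap_channel_stats` function and pixel lists here are illustrative assumptions.

```python
from statistics import mean, pstdev

def swap_channel_stats(content, style, eps=1e-6):
    """For each channel, normalize the content pixels and rescale them to
    the style channel's mean/std: the relative ordering of pixel values
    (a crude proxy for high-level structure) is preserved, while the
    low-level appearance statistics are replaced by those of the style
    source."""
    out = []
    for c_px, s_px in zip(content, style):
        mu_c, sd_c = mean(c_px), pstdev(c_px)
        mu_s, sd_s = mean(s_px), pstdev(s_px)
        out.append([(p - mu_c) / (sd_c + eps) * sd_s + mu_s for p in c_px])
    return out

# Toy single-channel "images" as channel-by-pixel lists: a histology patch
# channel restyled with the statistics of an arbitrary painting channel.
content = [[0.2, 0.4, 0.6, 0.8]]
style = [[10.0, 12.0, 14.0, 16.0]]
stylized = swap_channel_stats(content, style)
```

Training on many such restyled copies of each patch pushes the model to rely on content rather than texture, which is the intuition behind the augmentation.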
27
Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20]
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on the subject of H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology with the objective of triggering new research on the application of generative models in digital pathology informatics.
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
28
Tomczak A, Ilic S, Marquardt G, Engel T, Forster F, Navab N, Albarqouni S. Multi-Task Multi-Domain Learning for Digital Staining and Classification of Leukocytes. IEEE Trans Med Imaging 2021; 40:2897-2910. [PMID: 33347406 DOI: 10.1109/tmi.2020.3046334]
Abstract
This paper addresses digital staining and classification of unstained white blood cell images obtained with a differential contrast microscope. We have data coming from multiple domains that are partially labeled and partially matching across the domains. Using unstained images removes time-consuming staining procedures and could facilitate and automate comprehensive diagnostics. To this aim, we propose a method that translates unstained images into realistic-looking stained images while preserving the inter-cellular structures crucial for medical experts to perform classification. We achieve better structure preservation by adding auxiliary tasks of segmentation and direct reconstruction. Segmentation enforces that the network learns to generate correct nucleus and cytoplasm shapes, while direct reconstruction enforces reliable translation between matching images across domains. In addition, we build a robust domain-agnostic latent space by injecting the target domain label directly into the generator, i.e., bypassing the encoder. This allows the encoder to extract features independently of the target domain and enables automated domain-invariant classification of the white blood cells. We validated our method on a large dataset composed of leukocytes from 24 patients, achieving state-of-the-art performance on both digital staining and classification tasks.
29
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE Trans Med Imaging 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
30
Binary domain adaptation with independence maximization. Int J Mach Learn Cybern 2021. [DOI: 10.1007/s13042-021-01339-z]
31
Valkonen M, Hognas G, Bova GS, Ruusuvuori P. Generalized Fixation Invariant Nuclei Detection Through Domain Adaptation Based Deep Learning. IEEE J Biomed Health Inform 2021; 25:1747-1757. [PMID: 33211668 DOI: 10.1109/jbhi.2020.3039414]
Abstract
Nucleus detection is a fundamental task in histological image analysis and an important tool for many follow-up analyses. It is known that the sample preparation and scanning procedures for histological slides introduce a great amount of variability into histological images and pose challenges for automated nucleus detection. Here, we studied the effect of histopathological sample fixation on the accuracy of a deep learning based nuclei detection model trained with hematoxylin and eosin stained images. We experimented with training data that include three methods of fixation (PAXgene, formalin, and frozen) and studied the detection accuracy of various convolutional neural networks. Our results indicate that the variability introduced during sample preparation affects the generalization of a model and should be considered when building accurate and robust nuclei detection algorithms. Our dataset includes over 67,000 annotated nuclei locations from 16 patients and three different sample fixation types. The dataset provides an excellent basis for building an accurate and robust nuclei detection model, and combined with unsupervised domain adaptation, the workflow allows generalization to images from unseen domains, including different tissues and images from different labs.
32
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK
- Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK
- Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK
- Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
33
Stacke K, Eilertsen G, Unger J, Lundstrom C. Measuring Domain Shift for Deep Learning in Histopathology. IEEE J Biomed Health Inform 2021; 25:325-336. [PMID: 33085623 DOI: 10.1109/jbhi.2020.3032060]
Abstract
The high capacity of neural networks allows fitting models to data with high precision, but makes generalization to unseen data a challenge. If a domain shift exists, i.e., differences in image statistics between training and test data, care needs to be taken to ensure reliable deployment in real-world scenarios. In digital pathology, domain shift can be manifested in differences between whole-slide images, introduced for example by differences in the acquisition pipeline between medical centers or over time. In order to harness the great potential presented by deep learning in histopathology, and to ensure consistent model behavior, we need a deeper understanding of domain shift and its consequences, such that a model's predictions on new data can be trusted. This work focuses on the internal representation learned by trained convolutional neural networks, and shows how this can be used to formulate a novel measure, the representation shift, for quantifying the magnitude of model-specific domain shift. We perform a study on domain shift in tumor classification of hematoxylin and eosin stained images, considering different datasets, models, and techniques for preparing data in order to reduce the domain shift. The results show that the proposed measure has a high correlation with the drop in performance when testing a model across a large number of different types of domain shifts, and that it improves on existing techniques for measuring data shift and uncertainty. The proposed measure can reveal how sensitive a model is to domain variations, and can be used to detect new data that a model will have problems generalizing to. We see techniques for measuring, understanding, and overcoming domain shift as a crucial step towards reliable use of deep learning in future clinical pathology applications.
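The representation-shift idea, comparing the distributions of a model's internal activations on data from two domains, can be sketched with a simple per-channel distance. This is a hedged illustration under assumptions of mine: the paper defines its measure over CNN feature maps, whereas `representation_shift` below averages a 1-D earth mover's distance over toy feature channels, and all data are invented.

```python
def wasserstein_1d(a, b):
    """1-D earth mover's distance between two equal-size samples:
    the average absolute difference of the sorted values."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def representation_shift(feats_src, feats_tgt):
    """Average, over feature channels, of the distance between the
    activation distributions a model produces on two domains. A larger
    value suggests the model 'sees' the new domain differently and may
    generalize poorly to it."""
    channels = zip(zip(*feats_src), zip(*feats_tgt))
    per_channel = [wasserstein_1d(s, t) for s, t in channels]
    return sum(per_channel) / len(per_channel)

# Rows = patches, columns = feature-channel activations (toy 2 channels).
in_domain = [(0.1, 1.0), (0.2, 1.1), (0.3, 0.9), (0.2, 1.0)]
shifted = [(0.6, 1.5), (0.7, 1.6), (0.8, 1.4), (0.7, 1.5)]
same = representation_shift(in_domain, in_domain)  # identical domains -> 0.0
diff = representation_shift(in_domain, shifted)    # shifted domain -> larger
```

In practice the activations would come from a chosen layer of the trained network, evaluated on unlabeled patches from each center or scanner.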
34
Ma B, Guo Y, Hu W, Yuan F, Zhu Z, Yu Y, Zou H. Artificial Intelligence-Based Multiclass Classification of Benign or Malignant Mucosal Lesions of the Stomach. Front Pharmacol 2020; 11:572372. [PMID: 33132910 PMCID: PMC7562716 DOI: 10.3389/fphar.2020.572372]
Abstract
Gastric cancer (GC) is one of the leading causes of cancer-related death worldwide. Progression from chronic gastritis to GC takes time, so early detection of GC helps patients obtain timely treatment. Understanding disease evolution is crucial for the prevention and treatment of GC. Here, we present a convolutional neural network (CNN)-based system to detect abnormalities in the gastric mucosa. We identified normal mucosa, chronic gastritis, and intestinal-type GC: the most common route of gastric carcinogenesis. We integrated digitized histopathology of whole-slide images (WSIs), stain normalization, a deep CNN, and a random forest classifier. The staining variability of WSIs was reduced significantly through stain normalization, which also saved the cost and time of preparing new slides. Stain normalization improved the performance of the CNN model. Patch-level accuracy reached 98.4%, and 94.5% for discriminating normal → chronic gastritis → GC. WSI-level accuracy for discriminating normal tissue from cancerous tissue reached 96.0%, a state-of-the-art result. Survival analyses indicated that the features extracted from the CNN exerted a significant impact on predicting the survival of cancer patients. Our CNN model shows significant potential for the adjuvant diagnosis of gastric diseases, especially GC, and is useful for predicting prognosis.
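The last step of the pipeline described above, going from patch-level predictions to a whole-slide call, can be illustrated with a simple aggregation rule. This sketch is a simplified stand-in: the paper aggregates patch-level CNN outputs with a random forest classifier, whereas `slide_prediction` below uses plain majority voting, and the patch labels are invented.

```python
from collections import Counter

def slide_prediction(patch_labels):
    """Aggregate per-patch class predictions into a single whole-slide
    label by majority vote, returning the label and the fraction of
    patches that support it."""
    counts = Counter(patch_labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(patch_labels)

# Toy patch-level predictions over one slide (normal / gastritis / cancer).
patches = ["normal", "gastritis", "gastritis", "gastritis", "cancer"]
label, support = slide_prediction(patches)
```

A learned aggregator such as a random forest can exploit richer inputs (per-class patch fractions, spatial statistics) than a bare vote, which is one reason the paper uses it.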
Affiliation(s)
- Bowei Ma
- Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, China
- Yucheng Guo
- Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, China
- Weian Hu
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, China
- Fei Yuan
- Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhenggang Zhu
- Department of General Surgery, Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Lab for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yingyan Yu
- Department of General Surgery, Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Lab for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hao Zou
- Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, China