1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. [PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that brings together computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We review this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Liu P, Ji L, Zhang X, Ye F. Pseudo-Bag Mixup Augmentation for Multiple Instance Learning-Based Whole Slide Image Classification. IEEE Trans Med Imaging 2024;43:1841-1852. [PMID: 38194395; DOI: 10.1109/tmi.2024.3351213]
Abstract
Given the special challenge of modeling gigapixel images, multiple instance learning (MIL) has become one of the most important frameworks for whole slide image (WSI) classification. In current practice, most MIL networks face two unavoidable problems in training: i) insufficient WSI data and ii) the sample memorization inclination inherent in neural networks. These problems may hinder MIL models from adequate and efficient training, suppressing further performance improvement of classification models on WSIs. Inspired by the basic idea of Mixup, this paper proposes a new Pseudo-bag Mixup (PseMix) data augmentation scheme to improve the training of MIL models. The scheme generalizes the Mixup strategy from general images to WSIs via pseudo-bags so that it can be applied in MIL-based WSI classification. With the help of pseudo-bags, PseMix fulfills the critical size alignment and semantic alignment of the Mixup strategy. Moreover, it is designed as an efficient and decoupled method, neither involving time-consuming operations nor relying on MIL model predictions. Comparative experiments and ablation studies are specially designed to evaluate the effectiveness and advantages of PseMix. Experimental results show that PseMix often helps state-of-the-art MIL networks improve their classification performance on WSIs. Besides, it can also boost the generalization performance of MIL models in special test scenarios, and promote their robustness to patch occlusion and label noise. Our source code is available at https://github.com/liupei101/PseMix.
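The pseudo-bag idea can be sketched in a few lines. The following is a minimal, illustrative version only (not the authors' implementation): `n_pseudo` and the fixed `lam` are assumptions, and in practice `lam` would be drawn from a Beta distribution as in standard Mixup.

```python
import random

def split_into_pseudo_bags(bag, n_pseudo=4):
    """Randomly partition a bag's instances into n_pseudo pseudo-bags."""
    shuffled = bag[:]            # bag: list of instance feature vectors
    random.shuffle(shuffled)
    return [shuffled[i::n_pseudo] for i in range(n_pseudo)]

def psemix(bag_a, label_a, bag_b, label_b, n_pseudo=4, lam=0.5):
    """Mix two WSI bags at pseudo-bag granularity.

    lam controls how many pseudo-bags come from each bag; the mixed
    label is interpolated with the same coefficient, as in Mixup.
    """
    pbs_a = split_into_pseudo_bags(bag_a, n_pseudo)
    pbs_b = split_into_pseudo_bags(bag_b, n_pseudo)
    k = round(lam * n_pseudo)    # pseudo-bags drawn from bag A
    mixed = [inst for pb in pbs_a[:k] for inst in pb]
    mixed += [inst for pb in pbs_b[k:] for inst in pb]
    mixed_label = lam * label_a + (1 - lam) * label_b
    return mixed, mixed_label
```

Because mixing happens at the pseudo-bag level, the mixed bag keeps a size comparable to its parents (the "size alignment" the abstract refers to) without any forward pass through the MIL model.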
3
Yuan L, An L, Zhu Y, Duan C, Kong W, Jiang P, Yu QQ. Machine Learning in Diagnosis and Prognosis of Lung Cancer by PET-CT. Cancer Manag Res 2024;16:361-375. [PMID: 38699652; PMCID: PMC11063459; DOI: 10.2147/cmar.s451871]
Abstract
As a disease with high morbidity and mortality, lung cancer seriously harms people's health, so early diagnosis and treatment are especially important. PET/CT is commonly used for the early diagnosis, staging, and treatment-response evaluation of tumors, especially lung cancer; however, owing to tumor heterogeneity and differences in human image interpretation, among other reasons, it fails to entirely reflect the real state of tumors. Artificial intelligence (AI) has been applied to all aspects of life, and machine learning (ML) is one of the important ways to realize AI. Many studies have applied ML methods to PET/CT imaging in the diagnosis and treatment of lung cancer. This article summarizes the application progress of ML based on PET/CT in lung cancer in order to better serve clinical practice. In this study, we searched PubMed using machine learning, lung cancer, and PET/CT as keywords to find relevant articles from the past 5 years or more. We found that PET/CT-based ML approaches have achieved significant results in the detection, delineation, pathological classification, molecular subtyping, staging, and response assessment, as well as survival and prognosis, of lung cancer, which can provide clinicians with a powerful tool to support and assist critical daily clinical decisions. However, ML has some shortcomings, such as somewhat poor repeatability and reliability.
Affiliation(s)
- Lili Yuan
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Lin An
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Yandong Zhu
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Chongling Duan
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Weixiang Kong
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Pei Jiang
- Translational Pharmaceutical Laboratory, Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Qing-Qing Yu
- Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
4
Wu W, Gao C, DiPalma J, Vosoughi S, Hassanpour S. Improving Representation Learning for Histopathologic Images with Cluster Constraints. Proc IEEE Int Conf Comput Vis 2024;2023:21347-21357. [PMID: 38694561; PMCID: PMC11062482; DOI: 10.1109/iccv51070.2023.01957]
Abstract
Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides, a process that is both labor-intensive and time-consuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they do not rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework that aims for transferable representation learning and semantically meaningful clustering by synergizing an invariance loss and a clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on Camelyon16 and a pancreatic cancer dataset. The code and additional details are accessible at https://github.com/wwyi1828/CluSiam.
5
Lin TP, Yang CY, Liu KJ, Huang MY, Chen YL. Immunohistochemical Stain-Aided Annotation Accelerates Machine Learning and Deep Learning Model Development in the Pathologic Diagnosis of Nasopharyngeal Carcinoma. Diagnostics (Basel) 2023;13:3685. [PMID: 38132269; PMCID: PMC10743164; DOI: 10.3390/diagnostics13243685]
Abstract
Nasopharyngeal carcinoma (NPC) is an epithelial cancer originating in the nasopharynx epithelium. Nevertheless, annotating pathology slides remains a bottleneck in the development of AI-driven pathology models and applications. In the present study, we aim to demonstrate the feasibility of using immunohistochemistry (IHC) for annotation by non-pathologists and to develop an efficient model for distinguishing NPC without the time-consuming involvement of pathologists. For this study, we gathered NPC slides from 251 different patients, comprising hematoxylin and eosin (H&E) slides, pan-cytokeratin (Pan-CK) IHC slides, and Epstein-Barr virus-encoded small RNA (EBER) slides. The annotation of NPC regions in the H&E slides was carried out by a non-pathologist trainee who had access to corresponding Pan-CK IHC slides, both with and without EBER slides. The training process utilized ResNeXt, a deep neural network featuring a residual and inception architecture. In the validation set, NPC exhibited an AUC of 0.896, with a sensitivity of 0.919 and a specificity of 0.878. This study represents a significant breakthrough: the successful application of deep convolutional neural networks to identify NPC without the need for expert pathologist annotations. Our results underscore the potential of laboratory techniques to substantially reduce the workload of pathologists.
Affiliation(s)
- Tai-Pei Lin
- Department of Life Sciences, National Chung Hsing University, Taichung 402, Taiwan
- Chiou-Ying Yang
- Institute of Molecular Biology, National Chung Hsing University, Taichung 402, Taiwan
- Ko-Jiunn Liu
- National Institute of Cancer Research, National Health Research Institutes, Tainan 704, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung 807, Taiwan
- Institute of Clinical Pharmacy and Pharmaceutical Sciences and Institute of Clinical Medicine, National Cheng Kung University, Tainan 701, Taiwan
- Meng-Yuan Huang
- Department of Life Sciences, National Chung Hsing University, Taichung 402, Taiwan
- Yen-Lin Chen
- Department of Pathology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
6
Zhai T, Wang H. Online Passive-Aggressive Multilabel Classification Algorithms. IEEE Trans Neural Netw Learn Syst 2023;34:10116-10129. [PMID: 35436199; DOI: 10.1109/tnnls.2022.3164906]
Abstract
Most existing multilabel classification methods are batch learning methods, which may suffer from expensive retraining costs when dealing with new incoming data. In order to overcome the drawbacks of batch learning, we develop a family of online multilabel classification algorithms, which can update the model instantly and efficiently, and make a timely online prediction when new data arrive. Our algorithms all take a closed-form update, which is obtained by solving a constrained optimization problem in each round of online learning. Label correlation is explicitly modeled in our optimization problem. The label thresholding function, an important component of our online classifier, can also be learned online. Our algorithms can be easily generalized to the nonlinear prediction cases using Mercer kernels. The worst case loss bounds for our algorithms are provided. The bounds are relative to the cumulative loss suffered by the best fixed predictive model that can be attained in hindsight. Finally, we corroborate the merits of our algorithms in both linear and nonlinear predictions on nine open multilabel benchmark datasets.
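For context, the closed-form round described above generalizes the classic Passive-Aggressive update. A textbook sketch of the single-label binary PA-I building block (not the paper's multilabel formulation) looks like this:

```python
def pa_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) round for a binary label y in {-1, +1}.

    If the hinge loss on (x, y) is zero the model stays passive;
    otherwise it is updated in closed form, just enough to correct
    the current example, with aggressiveness capped by C.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)
    if loss == 0.0:
        return w                              # passive step
    sq_norm = sum(xi * xi for xi in x)
    tau = min(C, loss / sq_norm)              # PA-I step size
    return [wi + tau * y * xi for wi, xi in zip(w, x)]
```

The multilabel algorithms in the paper solve an analogous constrained optimization per round, additionally modeling label correlation and learning the thresholding function online.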
7
Yang Y, Guo X, Ye C, Xiang Y, Ma T. CReg-KD: Model refinement via confidence regularized knowledge distillation for brain imaging. Med Image Anal 2023;89:102916. [PMID: 37549611; DOI: 10.1016/j.media.2023.102916]
Abstract
One of the core challenges of deep learning in medical image analysis is data insufficiency, especially for 3D brain imaging, which may lead to model over-fitting and poor generalization. Regularization strategies such as knowledge distillation are powerful tools to mitigate the issue by penalizing predictive distributions and introducing additional knowledge to reinforce the training process. In this paper, we revisit knowledge distillation as a regularization paradigm by penalizing attentive output distributions and intermediate representations. In particular, we propose a Confidence Regularized Knowledge Distillation (CReg-KD) framework, which adaptively transfers knowledge for distillation in light of knowledge confidence. Two strategies are advocated to regularize the global and local dependencies between teacher and student knowledge. In detail, a gated distillation mechanism is proposed to soften the transferred knowledge globally by utilizing the teacher loss as a confidence score. Moreover, the intermediate representations are attentively and locally refined with key semantic context to mimic meaningful features. To demonstrate the superiority of our proposed framework, we evaluated it on two brain imaging analysis tasks (i.e., Alzheimer's Disease classification and brain age estimation based on T1-weighted MRI) on the Alzheimer's Disease Neuroimaging Initiative dataset including 902 subjects and a cohort of 3655 subjects from 4 public datasets. Extensive experimental results show that CReg-KD achieves consistent improvements over the baseline teacher model and outperforms other state-of-the-art knowledge distillation approaches, establishing CReg-KD as a powerful medical image analysis tool in terms of both prediction performance and generalizability.
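The gating idea can be illustrated with a toy loss. In this sketch (our paraphrase of "teacher loss as a confidence score", not the paper's exact formulation), the teacher's cross-entropy on the ground-truth label is converted into a gate that scales a temperature-softened KL distillation term:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gated_kd_loss(student_logits, teacher_logits, target, T=2.0):
    """Confidence-gated distillation loss (illustrative sketch).

    A low teacher cross-entropy on the target -> gate near 1 -> full
    soft-label transfer; a confused teacher -> the KD term is attenuated.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    teacher_ce = -math.log(softmax(teacher_logits)[target])
    gate = math.exp(-teacher_ce)          # confidence score in (0, 1]
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return gate * (T * T) * kl            # T^2 keeps gradient scale, as in KD
```

When the student matches the teacher the KL term vanishes, so the regularizer only acts where the two distributions disagree, and mostly where the teacher is trustworthy.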
Affiliation(s)
- Yanwu Yang
- Electronic & Information Engineering School, Harbin Institute of Technology (Shenzhen), Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Xutao Guo
- Electronic & Information Engineering School, Harbin Institute of Technology (Shenzhen), Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Chenfei Ye
- Peng Cheng Laboratory, Shenzhen, China; International Research Institute for Artificial Intelligence, Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Ting Ma
- Electronic & Information Engineering School, Harbin Institute of Technology (Shenzhen), Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, China; International Research Institute for Artificial Intelligence, Harbin Institute of Technology (Shenzhen), Shenzhen, China
8
Zheng Y, Li J, Shi J, Xie F, Huai J, Cao M, Jiang Z. Kernel Attention Transformer for Histopathology Whole Slide Image Analysis and Assistant Cancer Diagnosis. IEEE Trans Med Imaging 2023;42:2726-2739. [PMID: 37018112; DOI: 10.1109/tmi.2023.3264781]
Abstract
Transformer has been widely used in histopathology whole slide image (WSI) analysis. However, the design of token-wise self-attention and the positional embedding strategy in the common Transformer limit its effectiveness and efficiency when applied to gigapixel histopathology images. In this paper, we propose a novel kernel attention Transformer (KAT) for histopathology WSI analysis and assistant cancer diagnosis. Information transmission in KAT is achieved by cross-attention between the patch features and a set of kernels related to the spatial relationship of the patches on the whole slide image. Compared to the common Transformer structure, KAT can extract hierarchical context information from local regions of the WSI and provide diversified diagnostic information. Meanwhile, the kernel-based cross-attention paradigm significantly reduces the computational cost. The proposed method was evaluated on three large-scale datasets and compared with 8 state-of-the-art methods. The experimental results demonstrate that the proposed KAT is effective and efficient in histopathology WSI analysis and superior to the state-of-the-art methods.
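The efficiency argument can be made concrete with a minimal cross-attention sketch (illustrative only; the real KAT kernels encode spatial relationships on the slide and are learned). With K kernels attending over N patches, the cost is O(N·K) rather than the O(N²) of token-wise self-attention:

```python
import math

def kernel_cross_attention(patches, kernels):
    """Cross-attention between patch features and a small kernel set.

    patches: list of N feature vectors (one per WSI tile).
    kernels: list of K query vectors, K << N.
    Returns one attention-weighted aggregate feature per kernel.
    """
    d = len(patches[0])
    out = []
    for q in kernels:
        # scaled dot-product scores of this kernel against every patch
        scores = [sum(qi * pi for qi, pi in zip(q, p)) / math.sqrt(d)
                  for p in patches]
        m = max(scores)                       # stable softmax
        exps = [math.exp(s - m) for s in scores]
        a = [e / sum(exps) for e in exps]
        out.append([sum(a[i] * patches[i][j] for i in range(len(patches)))
                    for j in range(d)])
    return out
```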
9
Xu H, Wu A, Ren H, Yu C, Liu G, Liu L. Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images. Acta Histochem 2023;125:152057. [PMID: 37300984; DOI: 10.1016/j.acthis.2023.152057]
Abstract
Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous, with different clinical-pathological characteristics, prognostic status, and therapy responses. Thus, the precise diagnosis of CRC subtypes is of great significance for improving the prognosis and survival of CRC patients. Nowadays, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMS). In this study, we applied a weakly supervised deep learning method, named attention-based multiple instance learning (MIL), on formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish the CMS1 subtype from CMS2, CMS3, and CMS4, as well as CMS4 from CMS1, CMS2, and CMS3. The advantage of MIL is that it trains on a bag of tiled instances with bag-level labels only. Our experiments were performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling and mean-pooling operators to aggregate bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When comparing CMS1 with CMS2/3/4, max-pooling reached an ACC of 83.86% and mean-pooling reached an AUC of 0.731; when comparing CMS4 with CMS1/2/3, mean-pooling reached an ACC of 74.26% and max-pooling reached an AUC of 0.609. Our results imply that WSIs can be utilized to classify CMSs and that manual pixel-level annotation is not a necessity for computational pathology image analysis.
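The attention-based aggregation such studies use (alongside simple max- and mean-pooling) follows the well-known attention MIL pooling of Ilse et al.; a minimal sketch, where `V` and `w` stand in for learned parameters (the gated variant is omitted for brevity):

```python
import math

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling over a bag of tile features.

    Each instance h_k gets a score w . tanh(V h_k); a softmax over the
    scores yields attention weights, and the bag embedding is the
    attention-weighted sum of instance features.
    """
    def score(h):
        hidden = [math.tanh(sum(vij * hj for vij, hj in zip(row, h)))
                  for row in V]
        return sum(wi * hi for wi, hi in zip(w, hidden))

    raw = [score(h) for h in instances]
    m = max(raw)                              # stable softmax
    exps = [math.exp(r - m) for r in raw]
    a = [e / sum(exps) for e in exps]
    dim = len(instances[0])
    bag = [sum(a[k] * instances[k][d] for k in range(len(instances)))
           for d in range(dim)]
    return bag, a
```

Unlike max-pooling (one instance decides) or mean-pooling (all instances count equally), the learned weights let the model concentrate on the tiles most indicative of the subtype while still producing a single bag-level score.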
Affiliation(s)
- Huilin Xu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- Aoshen Wu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- He Ren
- Faculty of Medical Instrumentation, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
- Chenghang Yu
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research), Key Laboratory of Parasite and Vector Biology, National Health Commission of the People's Republic of China, WHO Collaborating Center for Tropical Diseases, Shanghai 200025, China
- Gang Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- Lei Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
10
DiPalma J, Torresani L, Hassanpour S. HistoPerm: A permutation-based view generation approach for improving histopathologic feature representation learning. J Pathol Inform 2023;14:100320. [PMID: 37457594; PMCID: PMC10339175; DOI: 10.1016/j.jpi.2023.100320]
Abstract
Deep learning has been effective for histology image analysis in digital pathology. However, many current deep learning approaches require large, strongly or weakly labeled images and regions of interest, which can be time-consuming and resource-intensive to obtain. To address this challenge, we present HistoPerm, a view generation method for representation learning using joint embedding architectures that enhances representation learning for histology images. HistoPerm permutes augmented views of patches extracted from whole-slide histology images to improve classification performance. We evaluated the effectiveness of HistoPerm on 2 histology image datasets for Celiac disease and Renal Cell Carcinoma, using 3 widely used joint embedding architecture-based representation learning methods: BYOL, SimCLR, and VICReg. Our results show that HistoPerm consistently improves patch- and slide-level classification performance in terms of accuracy, F1-score, and AUC. Specifically, for patch-level classification accuracy on the Celiac disease dataset, HistoPerm boosts BYOL and VICReg by 8% and SimCLR by 3%. On the Renal Cell Carcinoma dataset, patch-level classification accuracy is increased by 2% for BYOL and VICReg, and by 1% for SimCLR. In addition, on the Celiac disease dataset, models with HistoPerm outperform the fully supervised baseline model by 6%, 5%, and 2% for BYOL, SimCLR, and VICReg, respectively. For the Renal Cell Carcinoma dataset, HistoPerm narrows the classification accuracy gap relative to the fully supervised baseline by up to 10%. These findings suggest that HistoPerm can be a valuable tool for improving representation learning of histopathology features when access to labeled data is limited and can lead to whole-slide classification results that are comparable to or superior to fully supervised methods.
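The view-permutation step itself is simple to sketch. The version below is hypothetical and deliberately unconstrained: it generates two augmented views per patch and shuffles a fraction of the second views across the batch, whereas the actual method controls which views may be swapped.

```python
import random

def histoperm_views(batch, augment, perm_rate=0.5):
    """Generate two augmented views per sample, then permute a fraction
    of the second views across the batch (HistoPerm-style sketch).

    batch: list of patches; augment: callable producing one view.
    """
    view1 = [augment(x) for x in batch]
    view2 = [augment(x) for x in batch]
    k = int(perm_rate * len(batch))
    chosen = random.sample(range(len(batch)), k)
    swapped = [view2[i] for i in chosen]
    random.shuffle(swapped)                  # permute the chosen views
    for i, v in zip(chosen, swapped):
        view2[i] = v
    return view1, view2
```

The joint embedding loss (BYOL, SimCLR, or VICReg) is then applied to `(view1, view2)` pairs exactly as usual; only the pairing of views is altered.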
Affiliation(s)
- Joseph DiPalma
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Lorenzo Torresani
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Saeed Hassanpour
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
11
Hashimoto N, Takagi Y, Masuda H, Miyoshi H, Kohno K, Nagaishi M, Sato K, Takeuchi M, Furuta T, Kawamoto K, Yamada K, Moritsubo M, Inoue K, Shimasaki Y, Ogura Y, Imamoto T, Mishina T, Tanaka K, Kawaguchi Y, Nakamura S, Ohshima K, Hontani H, Takeuchi I. Case-based similar image retrieval for weakly annotated large histopathological images of malignant lymphoma using deep metric learning. Med Image Anal 2023;85:102752. [PMID: 36716701; DOI: 10.1016/j.media.2023.102752]
Abstract
In the present study, we propose a novel case-based similar image retrieval (SIR) method for hematoxylin and eosin (H&E) stained histopathological images of malignant lymphoma. When a whole slide image (WSI) is used as an input query, it is desirable to be able to retrieve similar cases by focusing on image patches in pathologically important regions such as tumor cells. To address this problem, we employ attention-based multiple instance learning, which enables us to focus on tumor-specific regions when the similarity between cases is computed. Moreover, we employ contrastive distance metric learning to incorporate immunohistochemical (IHC) staining patterns as useful supervised information for defining appropriate similarity between heterogeneous malignant lymphoma cases. In the experiment with 249 malignant lymphoma patients, we confirmed that the proposed method exhibited higher evaluation measures than the baseline case-based SIR methods. Furthermore, the subjective evaluation by pathologists revealed that our similarity measure using IHC staining patterns is appropriate for representing the similarity of H&E stained tissue images for malignant lymphoma.
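The contrastive metric-learning component can be sketched with the classic pairwise contrastive loss (a generic textbook form, not the paper's exact objective); here "same label" stands for two cases sharing an IHC staining pattern:

```python
def contrastive_loss(d, same_label, margin=1.0):
    """Pairwise contrastive metric-learning loss on embedding distance d.

    Pairs that share a label (e.g., the same IHC staining pattern) are
    pulled together; mismatched pairs are pushed apart up to `margin`,
    beyond which they contribute no loss.
    """
    if same_label:
        return 0.5 * d * d
    return 0.5 * max(0.0, margin - d) ** 2
```

Training embeddings of H&E-derived case features with such a loss makes distance in the embedding space reflect the IHC-defined similarity, which is exactly what the retrieval step then ranks by.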
Affiliation(s)
- Noriaki Hashimoto
- RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Yusuke Takagi
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Hiroki Masuda
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Hiroaki Miyoshi
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Kei Kohno
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Miharu Nagaishi
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Kensaku Sato
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Mai Takeuchi
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Takuya Furuta
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Keisuke Kawamoto
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Kyohei Yamada
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Mayuko Moritsubo
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Kanako Inoue
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Yasumasa Shimasaki
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Yusuke Ogura
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Teppei Imamoto
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Tatsuzo Mishina
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Ken Tanaka
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Yoshino Kawaguchi
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Shigeo Nakamura
- Department of Pathology and Laboratory Medicine, Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8560, Japan
- Koichi Ohshima
- Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Hidekata Hontani
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan
12
Intelligent Diagnosis of Multiple Peripheral Retinal Lesions in Ultra-widefield Fundus Images Based on Deep Learning. Ophthalmol Ther 2023; 12:1081-1095. [PMID: 36692813 PMCID: PMC9872743 DOI: 10.1007/s40123-023-00651-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Accepted: 01/05/2023] [Indexed: 01/25/2023] Open
Abstract
INTRODUCTION Compared with traditional fundus examination techniques, ultra-widefield fundus (UWF) imaging provides 200° panoramic images of the retina, allowing better detection of peripheral retinal lesions. However, UWF imaging so far offers an effective solution only for detection and still lacks efficient diagnostic capability. This study proposed a retinal lesion detection model that automatically locates and identifies six relatively typical, high-incidence peripheral retinal lesions in UWF images, enabling early screening and rapid diagnosis. METHODS A total of 24,602 augmented UWF images, labelled by 5 ophthalmologists with labels corresponding to 6 peripheral retinal lesions and normal manifestation, were included in this study. An object detection model named You Only Look Once X (YOLOX) was modified and trained to locate and classify the six peripheral retinal lesions: rhegmatogenous retinal detachment (RRD), retinal breaks (RB), white without pressure (WWOP), cystic retinal tuft (CRT), lattice degeneration (LD), and paving-stone degeneration (PSD). We added a coordinate attention block and the generalized intersection over union (GIoU) loss to YOLOX and evaluated the model for accuracy, sensitivity, specificity, precision, F1 score, and mean average precision (mAP). The model shows the exact location and a saliency map for each detected lesion, contributing to efficient screening and diagnosis. RESULTS The model reached an average accuracy of 96.64%, sensitivity of 87.97%, specificity of 98.04%, precision of 87.01%, F1 score of 87.39%, and mAP of 86.03% on test dataset 1 (248 UWF images), and an average accuracy of 95.04%, sensitivity of 83.90%, specificity of 96.70%, precision of 78.73%, F1 score of 81.96%, and mAP of 80.59% on external test dataset 2 (586 UWF images), showing that the system performs well in distinguishing the six peripheral retinal lesions.
CONCLUSION Focusing on peripheral retinal lesions, this work proposed a deep learning model that automatically recognizes multiple peripheral retinal lesions in UWF images and localizes their exact positions. It therefore has potential for early screening and intelligent diagnosis of peripheral retinal lesions.
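The GIoU loss used above extends plain IoU with a penalty based on the smallest enclosing box, which keeps the gradient informative even when predicted and ground-truth boxes do not overlap. A minimal sketch of the metric (function and variable names are illustrative, not from the paper's code):

```python
def giou(box_a, box_b):
    """Generalized IoU between two axis-aligned boxes [x1, y1, x2, y2]."""
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    # Smallest enclosing box C penalizes distant, non-overlapping boxes
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    return inter / union - (area_c - union) / area_c

# The training loss is 1 - GIoU.
identical = giou([0, 0, 2, 2], [0, 0, 2, 2])  # 1.0: perfect overlap
disjoint = giou([0, 0, 1, 1], [2, 2, 3, 3])   # negative: no overlap, but still graded
```

Unlike IoU, which is zero for all disjoint box pairs, GIoU distinguishes near misses from far ones, which is why detectors such as the modified YOLOX adopt it.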
13
Bozdag Z, Talu MF. Pyramidal position attention model for histopathological image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
14
Chen Y, Dong Y, Si L, Yang W, Du S, Tian X, Li C, Liao Q, Ma H. Dual Polarization Modality Fusion Network for Assisting Pathological Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:304-316. [PMID: 36155433 DOI: 10.1109/tmi.2022.3210113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Polarization imaging is sensitive to the sub-wavelength microstructure of various cancer tissues, providing abundant optical characteristics and microstructural information about complex pathological specimens. However, how to make good use of polarization information to strengthen pathological diagnosis remains a challenging issue. To take full advantage of both pathological image information and the polarization features of samples, we propose a dual polarization modality fusion network (DPMFNet), which consists of a multi-stream CNN structure and a switched attention fusion module that complementarily aggregates features from the different modality images. The proposed switched attention mechanism obtains joint feature embeddings by switching the attention maps of the modality images, improving their semantic relatedness. With an added dual-polarization contrastive training scheme, our method can synthesize and align the interactions and representations of the two polarization features. Experimental evaluations on three cancer datasets show the superiority of our method in assisting pathological diagnosis, especially with small datasets and at low imaging resolutions. Grad-CAM visualizations of the important regions of the pathological and polarization images indicate that the two modalities play different roles, allowing insightful explanations and analysis of the cancer diagnoses made by DPMFNet. This technique has the potential to improve pathology-aided diagnosis and to broaden the current boundary of digital pathology based on pathological image features.
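The "switched" attention idea can be illustrated in a simplified NumPy form: each modality's features are gated by the attention map derived from the *other* modality, so the fusion emphasizes regions the complementary stream finds salient. This is our own toy reading, not the authors' implementation, which operates on learned CNN feature maps:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def switched_attention_fusion(feat_a, feat_b):
    """Toy sketch: swap the spatial attention maps of two modality streams.

    feat_a, feat_b: (channels, positions) feature matrices for the two modalities
    (e.g., a pathology-image stream and a polarization stream).
    """
    attn_a = softmax(feat_a.mean(axis=0))  # spatial attention from stream A
    attn_b = softmax(feat_b.mean(axis=0))  # spatial attention from stream B
    fused_a = feat_a * attn_b              # A's features gated by B's attention
    fused_b = feat_b * attn_a              # B's features gated by A's attention
    return np.concatenate([fused_a, fused_b], axis=0)

rng = np.random.default_rng(0)
f1 = rng.normal(size=(8, 16))
f2 = rng.normal(size=(8, 16))
out = switched_attention_fusion(f1, f2)
print(out.shape)  # (16, 16): both gated streams stacked along channels
```

The swap is the essential point: it forces each modality's embedding to become semantically related to what the other modality attends to.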
15
Gao Z, Hong B, Li Y, Zhang X, Wu J, Wang C, Zhang X, Gong T, Zheng Y, Meng D, Li C. A semi-supervised multi-task learning framework for cancer classification with weak annotation in whole-slide images. Med Image Anal 2023; 83:102652. [PMID: 36327654 DOI: 10.1016/j.media.2022.102652] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 09/15/2022] [Accepted: 10/08/2022] [Indexed: 11/06/2022]
Abstract
Cancer region detection (CRD) and subtyping are two fundamental tasks in digital pathology image analysis. Data-driven models for CRD and subtyping on whole-slide images (WSIs) would mitigate the burden on pathologists and improve their diagnostic accuracy. However, existing models face two major limitations. First, they typically require large-scale datasets with precise annotations, which contradicts the original intention of reducing labor. Second, for the subtyping task, non-cancerous regions within a WSI are treated the same as cancerous regions, which confuses a subtyping model during training. To tackle the latter limitation, previous research proposed performing CRD first to rule out the non-cancerous regions and then training a subtyping model on the remaining cancerous patches. However, training the two tasks separately ignores their interaction and propagates errors from the CRD task to the subtyping task. To address these issues and concurrently improve performance on both tasks, we propose a semi-supervised multi-task learning (MTL) framework for cancer classification. Our framework consists of a backbone feature extractor, two task-specific classifiers, and a weight control mechanism. The backbone feature extractor is shared by the two task-specific classifiers so that the interaction between the CRD and subtyping tasks can be captured. The weight control mechanism preserves the sequential relationship of the two tasks and guarantees error back-propagation from the subtyping task to the CRD task under the MTL framework. We train the overall framework in a semi-supervised setting, where the datasets involve only small quantities of annotations produced by our minimal point-based (min-point) annotation strategy. Extensive experiments on four large datasets with different cancer types demonstrate the effectiveness of the proposed framework in both accuracy and generalization.
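The abstract describes the weight control mechanism only abstractly. One plausible reading is a confidence-gated loss weighting over the shared backbone: the subtyping loss contributes little until the CRD branch is confident a region is cancerous, preserving the sequential CRD-then-subtyping relationship while keeping both gradients coupled. The sketch below is entirely our illustration under that assumption, not the authors' code:

```python
def mtl_loss(loss_crd, loss_sub, crd_confidence):
    """Hypothetical weight control for a two-task MTL objective.

    loss_crd:       cancer-region-detection loss for a patch
    loss_sub:       subtyping loss for the same patch
    crd_confidence: CRD branch's predicted probability (in [0, 1]) that the
                    patch is cancerous; gates the subtyping term so errors
                    still back-propagate through the shared backbone.
    """
    w = crd_confidence
    return loss_crd + w * loss_sub

# Patch judged non-cancerous: subtyping loss is suppressed.
assert mtl_loss(1.0, 2.0, 0.0) == 1.0
# Patch judged cancerous: both tasks contribute fully.
assert mtl_loss(1.0, 2.0, 1.0) == 3.0
```

Because both terms flow through one shared feature extractor, the subtyping gradient can still correct CRD features, which is the interaction that separate training loses.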
Affiliation(s)
- Zeyu Gao
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Bangyang Hong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yang Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Xianli Zhang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Jialun Wu
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Chunbao Wang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Xiangrong Zhang
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Tieliang Gong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yefeng Zheng
- Tencent Jarvis Lab, Shenzhen, Guangdong 518075, China
- Deyu Meng
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
- Chen Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China.
16
Yang Y, Liu Z, Huang J, Sun X, Ao J, Zheng B, Chen W, Shao Z, Hu H, Yang Y, Ji M. Histological diagnosis of unprocessed breast core-needle biopsy via stimulated Raman scattering microscopy and multi-instance learning. Theranostics 2023; 13:1342-1354. [PMID: 36923541 PMCID: PMC10008736 DOI: 10.7150/thno.81784] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 01/09/2023] [Indexed: 03/14/2023] Open
Abstract
Core-needle biopsy (CNB) plays a vital role in the initial diagnosis of breast cancer. However, complex tissue processing and a global shortage of pathologists have prevented traditional histopathology from delivering timely diagnoses on fresh biopsies. In this work, we developed a fully digital platform by integrating label-free stimulated Raman scattering (SRS) microscopy with weakly-supervised learning for rapid and automated cancer diagnosis on unlabelled breast CNB. Methods: We first compared the results of SRS imaging with standard hematoxylin and eosin (H&E) staining on adjacent frozen tissue sections. Then fresh unprocessed biopsy tissues were imaged by SRS to reveal diagnostic histoarchitectures. Next, a weakly-supervised multi-instance learning (MIL) model was trained to differentiate between benign and malignant cases and compared with a supervised learning model. Finally, gradient-weighted class activation mapping (Grad-CAM) and semantic segmentation were performed to spatially resolve benign and malignant areas with high efficiency. Results: We verified the ability of SRS to reveal essential histological hallmarks of breast cancer in both thin frozen sections and fresh unprocessed biopsies, generating histoarchitectures well correlated with H&E staining. Moreover, we demonstrated that the weakly-supervised MIL model achieved classification performance superior to the supervised learning model, reaching a diagnostic accuracy of 95% on 61 biopsy specimens. Furthermore, Grad-CAM allowed the trained MIL model to visualize the histological heterogeneity within the CNB. Conclusion: Our results indicate that MIL-assisted SRS microscopy provides rapid and accurate diagnosis of histologically heterogeneous breast CNB and could potentially help the subsequent management of patients.
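Multi-instance learning as used here treats each biopsy as a "bag" of image-patch instances carrying only a bag-level (benign/malignant) label. A common pooling choice, shown below purely as an illustration (the top-k value and function names are our assumptions, not the paper's), scores the bag from its most suspicious patches:

```python
import numpy as np

def mil_bag_score(instance_scores, top_k=3):
    """Bag-level malignancy score as the mean of the top-k instance
    (patch) scores -- one standard MIL pooling strategy."""
    s = np.sort(np.asarray(instance_scores, dtype=float))[::-1]
    return float(s[:top_k].mean())

# A benign bag has uniformly low patch scores; a malignant bag needs
# only a few high-scoring patches to be flagged.
benign_bag = [0.1, 0.2, 0.05, 0.15]
malignant_bag = [0.1, 0.9, 0.95, 0.2]
print(mil_bag_score(benign_bag))     # low
print(mil_bag_score(malignant_bag))  # high
```

This is why MIL suits histologically heterogeneous specimens: the bag label can be driven by a small malignant region without patch-level annotation.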
Affiliation(s)
- Yifan Yang
- State Key Laboratory of Surface Physics and Department of Physics, Human Phenome Institute, Academy for Engineering and Technology, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), Yiwu Research Institute, Fudan University, Shanghai 200433, China
- Zhijie Liu
- State Key Laboratory of Surface Physics and Department of Physics, Human Phenome Institute, Academy for Engineering and Technology, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), Yiwu Research Institute, Fudan University, Shanghai 200433, China
- Jing Huang
- State Key Laboratory of Surface Physics and Department of Physics, Human Phenome Institute, Academy for Engineering and Technology, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), Yiwu Research Institute, Fudan University, Shanghai 200433, China
- Xiangjie Sun
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Jianpeng Ao
- State Key Laboratory of Surface Physics and Department of Physics, Human Phenome Institute, Academy for Engineering and Technology, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), Yiwu Research Institute, Fudan University, Shanghai 200433, China
- Bin Zheng
- Otolaryngology & Head and Neck Center, Cancer Center, Department of Otolaryngology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Wanyuan Chen
- Cancer Center, Department of Pathology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Zhiming Shao
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Hao Hu
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Yinlong Yang
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Minbiao Ji
- State Key Laboratory of Surface Physics and Department of Physics, Human Phenome Institute, Academy for Engineering and Technology, Key Laboratory of Micro and Nano Photonic Structures (Ministry of Education), Yiwu Research Institute, Fudan University, Shanghai 200433, China
17
Wei T, Aviles-Rivero AI, Wang S, Huang Y, Gilbert FJ, Schönlieb CB, Chen CW. Beyond fine-tuning: Classifying high resolution mammograms using function-preserving transformations. Med Image Anal 2022; 82:102618. [PMID: 36183607 DOI: 10.1016/j.media.2022.102618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 08/03/2022] [Accepted: 09/02/2022] [Indexed: 11/15/2022]
Abstract
The task of classifying mammograms is very challenging because the lesion is usually small within a high resolution image. Current state-of-the-art approaches to medical image classification rely on the de facto method for convolutional neural networks: fine-tuning. However, there are fundamental differences between natural and medical images which, based on existing evidence from the literature, limit the overall performance gain of this approach. In this paper, we propose to go beyond fine-tuning by introducing a novel framework called MorphHR, which embodies a new transfer learning scheme. The idea behind the proposed framework is to integrate function-preserving transformations, for any continuous non-linear activation neurons, to internally regularise the network and improve mammogram classification. The proposed solution offers two major advantages over existing techniques. First, unlike fine-tuning, the proposed approach allows one to modify not only the last few layers of a deep ConvNet but also several of the first ones, so that the network front can be designed to learn domain-specific features. Second, the proposed scheme is scalable to hardware, so high resolution images fit in standard GPU memory; we show that using high resolution images prevents the loss of relevant information. We demonstrate, through numerical and visual experiments, that the proposed approach yields a significant improvement in classification performance over state-of-the-art techniques and is on a par with radiology experts. Moreover, for generalisation purposes, we show the effectiveness of the proposed learning scheme on another large dataset, ChestX-ray14, surpassing current state-of-the-art techniques.
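Function-preserving transformations of the kind invoked above can be illustrated by Net2Net-style layer widening: duplicated hidden units keep their incoming weights, and the corresponding outgoing weights are split by the duplication count, so the widened network computes exactly the same function before further training. This is a sketch under that interpretation (the helper name is ours, not MorphHR's API):

```python
import numpy as np

def widen_linear(W1, W2, new_width):
    """Function-preserving widening of a hidden layer (Net2Net-style).

    W1: (hidden, in) incoming weights; W2: (out, hidden) outgoing weights.
    Duplicated units replicate rows of W1; the matching columns of W2 are
    divided by how many copies exist, so W2_new @ act(W1_new @ x) is unchanged.
    """
    old_width = W1.shape[0]
    rng = np.random.default_rng(0)
    mapping = np.concatenate([np.arange(old_width),
                              rng.integers(0, old_width, new_width - old_width)])
    counts = np.bincount(mapping, minlength=old_width)
    W1_new = W1[mapping]                       # replicate incoming weights
    W2_new = W2[:, mapping] / counts[mapping]  # split outgoing weights
    return W1_new, W2_new

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # hidden(4) x in(3)
W2 = rng.normal(size=(2, 4))   # out(2) x hidden(4)
x = rng.normal(size=3)
W1n, W2n = widen_linear(W1, W2, 6)

# Output preserved through a ReLU hidden layer, since duplicated units
# share pre-activations and their split outgoing weights sum back exactly.
preserved = np.allclose(W2 @ np.maximum(W1 @ x, 0),
                        W2n @ np.maximum(W1n @ x, 0))
print(preserved)  # True
```

Because the initial function is preserved, the enlarged (or restructured) network starts from the pretrained model's behaviour rather than from scratch, which is what lets early layers be reshaped for domain-specific features without discarding transferred knowledge.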
Affiliation(s)
- Tao Wei
- The Department of Computer Science, State University of New York at Buffalo, NY, USA
- Shuo Wang
- The Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of MICCAI, Shanghai, China
- Yuan Huang
- The Department of Radiology, University of Cambridge, UK
- Chang Wen Chen
- The Department of Computer Science, State University of New York at Buffalo, NY, USA
18
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683 PMCID: PMC9654172 DOI: 10.3390/cancers14215264] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/10/2022] [Accepted: 10/24/2022] [Indexed: 10/06/2023] Open
Abstract
The revolution in artificial intelligence and its impact on our daily lives have generated tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, classification, grading of stages, diagnostic forecasting, and recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and time consumed, both of which affect the accuracy of the decisions taken. Regrettably, certain obstacles to artificial intelligence deployment remain to be overcome, such as the applicability and validation of algorithms and computational technologies, the need to train pathologists and doctors to use these tools, and their willingness to accept the results. This review surveys how machine learning and deep learning methods could be implemented in health care providers' routine tasks and the obstacles and opportunities for artificial intelligence applications in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
- Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
19
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology allows computational technologies to be used to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase: extensive research has shown that AI algorithms can produce more timely and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review examines the use of the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution to the imbalance of healthcare resources. The article focuses on the role of deep learning technology in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
20
Shen Z, Hu J, Wu H, Chen Z, Wu W, Lin J, Xu Z, Kong J, Lin T. Global research trends and foci of artificial intelligence-based tumor pathology: a scientometric study. Lab Invest 2022; 20:409. [PMID: 36068536 PMCID: PMC9450455 DOI: 10.1186/s12967-022-03615-0] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 08/25/2022] [Indexed: 02/08/2023]
Abstract
Background With the development of digital pathology and the renewal of deep learning algorithms, artificial intelligence (AI) is widely applied in tumor pathology. Previous research has demonstrated that AI-based tumor pathology may help solve the challenges faced by traditional pathology. The technology has attracted the attention of scholars in many fields, and a large number of articles have been published. This study summarizes the knowledge structure of AI-based tumor pathology through bibliometric analysis and discusses potential research trends and foci. Methods Publications related to AI-based tumor pathology from 1999 to 2021 were selected from the Web of Science Core Collection. VOSviewer and CiteSpace were used to perform and visualize co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references and keywords in this field. Results A total of 2753 papers were included. The number of papers on AI-based tumor pathology has increased continuously since 1999. The United States made the largest contribution to this field in terms of publications (1138, 41.34%), H-index (85) and total citations (35,539). The most productive institution and author were Harvard Medical School and Madabhushi Anant, respectively, while Jemal Ahmedin was the most co-cited author. Scientific Reports was the most prominent journal, and Lecture Notes in Computer Science was the journal with the highest total link strength. According to the analysis of references and keywords, "breast cancer histopathology", "convolutional neural network" and "histopathological image" were identified as major future research foci. Conclusions AI-based tumor pathology is in a stage of vigorous development with a bright prospect. International and transboundary cooperation among countries and institutions should be strengthened in the future. It is foreseeable that further research foci will lie in the interpretability of deep learning-based models and the development of multi-modal fusion models. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03615-0.
Affiliation(s)
- Zefeng Shen
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jintao Hu
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Haiyang Wu
- Graduate School of Tianjin Medical University, No. 22 Qixiangtai Road, Tianjin 300070, China
- Zeshi Chen
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Weixia Wu
- Zhujiang Hospital, Southern Medical University, 253 Gongye Road M, Guangzhou 510282, China
- Junyi Lin
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zixin Xu
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jianqiu Kong
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
- Tianxin Lin
- Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
21
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
22
Lou X, Zhou N, Feng L, Li Z, Fang Y, Fan X, Ling Y, Liu H, Zou X, Wang J, Huang J, Yun J, Yao J, Huang Y. Deep Learning Model for Predicting the Pathological Complete Response to Neoadjuvant Chemoradiotherapy of Locally Advanced Rectal Cancer. Front Oncol 2022; 12:807264. [PMID: 35756653 PMCID: PMC9214314 DOI: 10.3389/fonc.2022.807264] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 05/02/2022] [Indexed: 12/24/2022] Open
Abstract
Objective This study aimed to develop an artificial intelligence model for predicting the pathological complete response (pCR) to neoadjuvant chemoradiotherapy (nCRT) of locally advanced rectal cancer (LARC) using digital pathology images. Background nCRT followed by total mesorectal excision (TME) is a standard treatment strategy for patients with LARC, but predicting the pCR to nCRT of LARC remains difficult. Methods 842 LARC patients treated with standard nCRT at three medical centers were retrospectively recruited and divided into training, testing and external validation sets. Treatment response was classified as pCR or non-pCR based on the pathological diagnosis after surgery as the ground truth. The hematoxylin & eosin (H&E)-stained biopsy slides were manually annotated and used to develop a deep pathological complete response (DeepPCR) prediction model by deep learning. Results The proposed DeepPCR model achieved an AUC-ROC of 0.710 (95% CI: 0.595, 0.808) in the testing cohort. Similarly, in the external validation cohort, the DeepPCR model achieved an AUC-ROC of 0.723 (95% CI: 0.591, 0.844). The sensitivity and specificity of the DeepPCR model were 72.6% and 46.9% in the testing set and 72.5% and 62.7% in the external validation cohort, respectively. Multivariate logistic regression analysis showed that the DeepPCR model was an independent predictive factor of pCR (P=0.008 and P=0.004 for the testing set and external validation set, respectively). Conclusions The DeepPCR model showed high accuracy in predicting pCR and served as an independent predictive factor for pCR. The model can be used to assist clinical treatment decision-making before surgery.
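AUC-ROC, the headline metric here, can be read as the probability that a randomly chosen pCR case receives a higher model score than a randomly chosen non-pCR case (the rank, or Mann-Whitney, formulation). A compact sketch of that computation, with illustrative names:

```python
def auc_roc(labels, scores):
    """AUC via the rank formulation: the probability that a random
    positive outscores a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two positives, two negatives; 3 of the 4 positive-negative pairs are
# ranked correctly, giving an AUC of 0.75.
print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC around 0.71 to 0.72, as reported, therefore means the model ranks a pCR patient above a non-pCR patient roughly 7 times out of 10, independent of any score threshold.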
Affiliation(s)
- Xiaoying Lou
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Lili Feng
- Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Department of Radiation Oncology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Zhenhui Li
- Department of Pathology, Yunnan Cancer Hospital, Kunming, China
- Yuqi Fang
- Tencent AI Lab, Shenzhen, China; Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xinjuan Fan
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yihong Ling
- Department of Pathology, Cancer Center of Sun Yat-sen University, Guangzhou, China
- Hailing Liu
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xuan Zou
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Jing Wang
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Jingping Yun
- Department of Pathology, Cancer Center of Sun Yat-sen University, Guangzhou, China
- Yan Huang
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Guangdong Institute of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
23
Sheikh TS, Kim JY, Shim J, Cho M. Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis. Diagnostics (Basel) 2022; 12:diagnostics12061480. [PMID: 35741289 PMCID: PMC9222016 DOI: 10.3390/diagnostics12061480] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/11/2022] [Accepted: 06/14/2022] [Indexed: 11/16/2022] Open
Abstract
An automatic pathological diagnosis is a challenging task because histopathological images with different cellular heterogeneity representations are sometimes limited. To overcome this, we investigated how holistic and local appearance features with limited information can be fused to enhance analysis performance. We propose an unsupervised deep learning model for whole-slide image diagnosis, which uses stacked autoencoders simultaneously fed multiple image descriptors, such as the histogram of oriented gradients and local binary patterns, along with the original image, to fuse the heterogeneous features. The pre-trained latent vectors are extracted from each autoencoder, and these fused feature representations are utilized for classification. Through various experiments, we observed that training with additional descriptors helps the model overcome the limitations of multiple variants and the intricate cellular structure of histopathology data. Our model outperforms existing state-of-the-art approaches, achieving the highest accuracies of 87.2% for ICIAR2018 and 94.6% for Dartmouth, along with other significant metrics on public benchmark datasets. Our model does not rely on a specific set of pre-trained features based on classifiers to achieve high performance. Unsupervised spaces are learned from a number of independent descriptors and can be used with different variants of classifiers to classify cancer diseases from whole-slide images. Furthermore, visualization shows that the proposed model classifies the types of breast and lung cancer in a manner similar to the viewpoint of pathologists. We also designed a whole-slide image processing toolbox to extract and process patches from whole-slide images.
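The descriptor-fusion idea can be sketched as follows: concatenate raw patch intensities with a local binary pattern histogram before feeding an autoencoder. This is an illustration, not the authors' implementation; the HOG descriptor is omitted for brevity, and `lbp8`/`fused_descriptor` are hypothetical helper names:

```python
import numpy as np

def lbp8(img):
    """Minimal 8-neighbour local binary pattern (radius 1).
    Each interior pixel gets an 8-bit code from neighbour comparisons."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def fused_descriptor(img):
    """Concatenate normalised raw intensities with an LBP histogram,
    forming the heterogeneous input for an autoencoder."""
    raw = img.astype(np.float32).ravel() / 255.0
    hist, _ = np.histogram(lbp8(img), bins=256, range=(0, 256), density=True)
    return np.concatenate([raw, hist])
```

Each descriptor captures a different appearance statistic, so the autoencoder's latent space must reconcile holistic (raw) and local (texture) information.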
Affiliation(s)
- Jee-Yeon Kim
- Department of Pathology, Pusan National University Yangsan Hospital, School of Medicine, Pusan National University, Yangsan-si 50612, Korea
- Jaesool Shim
- School of Mechanical Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Correspondence: (J.S.); (M.C.)
- Migyung Cho
- Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Korea
- Correspondence: (J.S.); (M.C.)
24
Thiagarajan P, Khairnar P, Ghosh S. Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:815-825. [PMID: 34699354 DOI: 10.1109/tmi.2021.3123300] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Despite the promise of convolutional neural network (CNN)-based classification models for histopathological images, it is infeasible to quantify their uncertainties. Moreover, CNNs may suffer from overfitting when the data are biased. We show that a Bayesian-CNN can overcome these limitations by regularizing automatically and by quantifying the uncertainty. We have developed a novel technique to utilize the uncertainties provided by the Bayesian-CNN that significantly improves the performance on a large fraction of the test data (about 6% improvement in accuracy on 77% of the test data). Further, we provide a novel explanation for the uncertainty by projecting the data into a low-dimensional space through a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low-dimensional feature space. We show that the Bayesian-CNN can perform much better than the state-of-the-art transfer learning CNN (TL-CNN) by reducing the false negatives and false positives by 11% and 7.7%, respectively, for the present data set. It achieves this performance with only 1.86 million parameters, compared to 134.33 million for TL-CNN. Besides, we modify the Bayesian-CNN by introducing a stochastic adaptive activation function. The modified Bayesian-CNN performs slightly better than the Bayesian-CNN on all performance metrics and significantly reduces the number of false negatives and false positives (3% reduction for both). We also show that these results are statistically significant by performing McNemar's test. This work shows the advantages of the Bayesian-CNN against the state-of-the-art, and explains and utilizes the uncertainties for histopathological images. It should find applications in various medical image classifications.
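The way such predictive uncertainties are typically extracted can be illustrated with a generic sketch: run T stochastic forward passes (weight samples in a Bayesian network, or MC dropout) and take the entropy of the mean predictive distribution. This shows the general recipe, not the paper's exact estimator; the simulated probability samples are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_entropy(prob_samples):
    """Uncertainty from T stochastic forward passes: entropy of the
    mean predictive distribution. prob_samples: (T, n_classes)."""
    p = prob_samples.mean(axis=0)
    return float(-np.sum(p * np.log(p + 1e-12)))

# a confident case: all 50 passes agree on class 0
confident = np.tile([0.99, 0.01], (50, 1))
# an uncertain case: passes disagree wildly
uncertain = rng.dirichlet([1.0, 1.0], size=50)
```

High-entropy cases can then be routed to a pathologist for review, which is the kind of selective-prediction use the paper exploits to improve accuracy on the retained fraction.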
25
Mehta S, Lu X, Wu W, Weaver D, Hajishirzi H, Elmore JG, Shapiro LG. End-to-End Diagnosis of Breast Biopsy Images with Transformers. Med Image Anal 2022; 79:102466. [PMID: 35525135 PMCID: PMC10162595 DOI: 10.1016/j.media.2022.102466] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 02/25/2022] [Accepted: 04/18/2022] [Indexed: 01/18/2023]
Abstract
Diagnostic disagreements among pathologists occur throughout the spectrum of benign to malignant lesions. A computer-aided diagnostic system capable of reducing uncertainties would have important clinical impact. To develop a computer-aided diagnosis method for classifying breast biopsy images into a range of diagnostic categories (benign, atypia, ductal carcinoma in situ, and invasive breast cancer), we introduce a transformer-based holistic attention network called HATNet. Unlike state-of-the-art histopathological image classification systems that use a two-pronged approach, i.e., they first learn local representations using a multi-instance learning framework and then combine these local representations to produce image-level decisions, HATNet streamlines the histopathological image classification pipeline and shows how to learn representations from gigapixel-size images end-to-end. HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision. It outperforms the previous best network, Y-Net, which uses supervision in the form of tissue-level segmentation masks, by 8%. Importantly, our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of 87 U.S. pathologists on this challenging test set.
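The core mechanism (self-attention over a bag of patch embeddings, then pooling to an image-level decision) can be sketched in numpy. Projection weight matrices are dropped (identity) for brevity, so this illustrates scaled dot-product attention over patches rather than HATNet itself; `image_logit` and its linear head `w` are hypothetical:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over a bag of
    patch embeddings X (n_patches, d); Q = K = V = X for brevity."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    A /= A.sum(axis=1, keepdims=True)                        # rows sum to 1
    return A @ X  # each patch re-encoded with global context

def image_logit(X, w):
    """Image-level decision: mean-pool attended patches, linear head."""
    return self_attention(X).mean(axis=0) @ w
```

Because every patch attends to every other patch, the pooled representation can reflect tissue-wide context, which patch-independent multi-instance pipelines cannot capture directly.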
Affiliation(s)
- Ximing Lu
- University of Washington, Seattle, USA
- Wenjun Wu
- University of Washington, Seattle, USA
- Donald Weaver
- Department of Pathology, The University of Vermont College of Medicine, USA
- Joann G Elmore
- David Geffen School of Medicine, University of California, Los Angeles, USA
26
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111 PMCID: PMC9068546 DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting prognosis. However, traditional microscopic examination of tissue sections is time consuming and labor intensive, with unavoidable subjective variations. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of and challenges to the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and even full text to determine their true relevance. References from the searched articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has obtained impressive achievements in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage, and improving their economic efficiency and clinical adaptability remains the focus of further research.
CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine practice, but further studies are needed to realize the digitization and automation of clinical pathology.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
27
Renal Cancer Detection: Fusing Deep and Texture Features from Histopathology Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:9821773. [PMID: 35386304 PMCID: PMC8979690 DOI: 10.1155/2022/9821773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 02/16/2022] [Accepted: 02/21/2022] [Indexed: 11/18/2022]
Abstract
Histopathological images contain morphological markers of disease progression that have diagnostic and predictive value, and many computer-aided diagnosis systems using common deep learning methods have been proposed to save time and labour. Even though deep learning methods are end-to-end, they perform exceptionally well given a large dataset but often show relatively inferior results on a small dataset. In contrast, traditional feature extraction methods have greater robustness and perform well with a small or medium dataset. Moreover, a texture representation-based global approach is commonly used to classify histological tissue images without explicit segmentation to extract structural properties. Considering the scarcity of medical datasets and the usefulness of texture representation, we would like to integrate the advantages of both deep learning and traditional machine learning, i.e., texture representation. To accomplish this task, we propose a classification model to detect renal cancer in a histopathology dataset by fusing the features from a deep learning model with extracted texture feature descriptors. Here, five texture feature descriptors from three texture feature families were applied to complement Alex-Net for extensive validation of the fusion between the deep features and texture features. The texture features come from (1) the statistical feature family: histogram of gradients, gray-level co-occurrence matrix, and local binary pattern; (2) the transform-based texture feature family: Gabor filters; and (3) the model-based texture feature family: Markov random field. The final classification results outperformed both Alex-Net and any singular texture descriptor, showing the effectiveness of combining deep features and texture features in renal cancer detection.
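A minimal version of the deep-plus-texture fusion can be sketched as follows. The GLCM here uses a single horizontal offset and two Haralick-style statistics, and `fuse` simply concatenates a placeholder CNN feature vector with the texture descriptor; this is an illustration of the general idea, not the paper's pipeline:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and homogeneity from a horizontal-offset gray-level
    co-occurrence matrix, normalised to a joint probability table."""
    q = np.clip((img.astype(int) * levels) // 256, 0, levels - 1)  # quantise
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontally adjacent pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    return np.array([contrast, homogeneity])

def fuse(deep_feat, img):
    """Late fusion: concatenate CNN features with texture descriptors."""
    return np.concatenate([deep_feat, glcm_features(img)])
```

A perfectly uniform patch has zero contrast and homogeneity of 1, so the texture branch contributes exactly the structural statistics a CNN trained on little data may fail to learn.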
28
Suri JS, Bhagawati M, Paul S, Protogerou AD, Sfikakis PP, Kitas GD, Khanna NN, Ruzsa Z, Sharma AM, Saxena S, Faa G, Laird JR, Johri AM, Kalra MK, Paraskevas KI, Saba L. A Powerful Paradigm for Cardiovascular Risk Stratification Using Multiclass, Multi-Label, and Ensemble-Based Machine Learning Paradigms: A Narrative Review. Diagnostics (Basel) 2022; 12:722. [PMID: 35328275 PMCID: PMC8947682 DOI: 10.3390/diagnostics12030722] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 03/10/2022] [Accepted: 03/13/2022] [Indexed: 12/16/2022] Open
Abstract
Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared to more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratories. Methods: A total of 265 CVD-based studies were selected using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) model. Due to their popularity and recent development, the study analyzed the above three paradigms using machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk-of-bias (RoB) in the CVD framework. These ML techniques were then extended under mobile and cloud-based infrastructure. Findings: The most popular biomarkers used were office-based, laboratory-based, and image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is very important for CVD stratification to avoid RoB. It was observed that the most popular classification paradigm is multiclass, followed by ensemble and multi-label. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are more likely to be the future. Conclusions: AI-based methods for CVD risk assessment are most promising and successful. Choice of GT is most vital in AI-based models to prevent RoB.
The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.
Affiliation(s)
- Jasjit S. Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Mrinalini Bhagawati
- Department of Biomedical Engineering, North-Eastern Hill University, Shillong 793022, India; (M.B.); (S.P.)
- Sudip Paul
- Department of Biomedical Engineering, North-Eastern Hill University, Shillong 793022, India; (M.B.); (S.P.)
- Athanasios D. Protogerou
- Research Unit Clinic, Laboratory of Pathophysiology, Department of Cardiovascular Prevention, National and Kapodistrian University of Athens, 11527 Athens, Greece
- Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 11527 Athens, Greece
- George D. Kitas
- Arthritis Research UK Centre for Epidemiology, Manchester University, Manchester 46962, UK
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110020, India
- Zoltan Ruzsa
- Department of Internal Medicines, Invasive Cardiology Division, University of Szeged, 6720 Szeged, Hungary
- Aditya M. Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22903, USA
- Sanjay Saxena
- Department of CSE, International Institute of Information Technology, Bhubaneswar 751003, India
- Gavino Faa
- Department of Pathology, A.O.U., di Cagliari-Polo di Monserrato s.s., 09045 Cagliari, Italy
- John R. Laird
- Cardiology Department, St. Helena Hospital, St. Helena, CA 94574, USA
- Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Kosmas I. Paraskevas
- Department of Vascular Surgery, Central Clinic of Athens, N. Iraklio, 14122 Athens, Greece
- Luca Saba
- Department of Radiology, A.O.U., di Cagliari-Polo di Monserrato s.s., 09045 Cagliari, Italy
29
Li X, Jiang Y, Zhang J, Li M, Luo H, Yin S. Lesion-attention pyramid network for diabetic retinopathy grading. Artif Intell Med 2022; 126:102259. [PMID: 35346445 DOI: 10.1016/j.artmed.2022.102259] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 01/13/2022] [Accepted: 02/16/2022] [Indexed: 02/01/2023]
30
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
31
Artificial intelligence as a tool for diagnosis in digital pathology whole slide images: A systematic review. J Pathol Inform 2022; 13:100138. [PMID: 36268059 PMCID: PMC9577128 DOI: 10.1016/j.jpi.2022.100138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 09/04/2022] [Accepted: 09/04/2022] [Indexed: 12/22/2022] Open
Abstract
Digital pathology has grown recently, stimulated by the implementation of digital whole slide images (WSIs) in clinical practice, while the pathology field has faced a shortage of pathologists in the last few years. This scenario created fronts of research applying artificial intelligence (AI) to help pathologists. One of them is automated diagnosis, supporting clinical decisions and increasing the efficiency and quality of diagnosis. However, the complex nature of WSIs requires special treatment to create a reliable AI model for diagnosis. Therefore, we systematically reviewed the literature to analyze and discuss all the methods and results of AI in digital pathology performed on H&E-stained WSIs, investigating the capacity of AI as a diagnostic support tool for the pathologist in the routine real-world scenario. This review analyzes 26 studies, reporting in detail the best methods for applying AI as a diagnostic tool as well as the main limitations, and suggests new ideas to improve the AI field in digital pathology as a whole. We hope that this study could lead to better use of AI as a diagnostic tool in pathology, helping future researchers in the development of new studies and projects.
32
Ye Q, Zhang Q, Tian Y, Zhou T, Ge H, Wu J, Lu N, Bai X, Liang T, Li J. Method of Tumor Pathological Micronecrosis Quantification Via Deep Learning From Label Fuzzy Proportions. IEEE J Biomed Health Inform 2021; 25:3288-3299. [PMID: 33822729 DOI: 10.1109/jbhi.2021.3071276] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The presence of necrosis is associated with tumor progression and patient outcomes in many cancers, but existing analyses rarely adopt quantitative methods because the manual quantification of histopathological features is too expensive. We aim to accurately identify necrotic regions on hematoxylin and eosin (HE)-stained slides and to calculate the necrosis ratio with minimal annotations on the images. An adaptive method named Learning from Label Fuzzy Proportions (LLFP) was introduced to histopathological image analysis. Two datasets of liver cancer HE slides were collected to verify the feasibility of the method by training on the internal set using cross-validation and performing validation on the external set, along with ensemble learning to improve performance. The models from cross-validation performed relatively stably in identifying necrosis, with a Concordance Index of the Slide Necrosis Score (CISNS) of 0.9165±0.0089 in the internal test set. The integration model improved the CISNS to 0.9341 and achieved a CISNS of 0.8278 on the external set. There were significant differences in survival (p = 0.0060) between the three groups divided according to the calculated necrosis ratio. The proposed method can build an integration model that is good at distinguishing necrosis and capable of clinical assistance as an automatic tool to stratify patients with different risks or as a clustering tool for the quantification of histopathological features. We present a method effective for identifying histopathological features and suggest that the extent of necrosis, especially micronecrosis, in liver cancer is related to patient outcomes.
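Once patch-wise necrosis probabilities are available, the slide-level necrosis ratio itself is a simple aggregate; a hypothetical helper is sketched below (the paper's LLFP training, which avoids patch-level labels, is the hard part and is not shown):

```python
import numpy as np

def necrosis_ratio(patch_probs, tissue_mask, threshold=0.5):
    """Slide-level necrosis ratio from patch-wise necrosis probabilities:
    necrotic tissue patches / all tissue patches (background excluded)."""
    tissue = tissue_mask.astype(bool)
    necrotic = (patch_probs >= threshold) & tissue
    return necrotic.sum() / tissue.sum()
```

Thresholding such a continuous ratio into groups is how the survival stratification described above would typically be derived.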
33
DiPalma J, Suriawinata AA, Tafe LJ, Torresani L, Hassanpour S. Resolution-based distillation for efficient histology image classification. Artif Intell Med 2021; 119:102136. [PMID: 34531005 PMCID: PMC8449014 DOI: 10.1016/j.artmed.2021.102136] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 07/07/2021] [Accepted: 08/02/2021] [Indexed: 12/14/2022]
Abstract
Developing deep learning models to analyze histology images has been computationally challenging, as the massive size of the images causes excessive strain on all parts of the computing pipeline. This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification. The proposed approach is robust when used with images that have reduced input resolution, and it can be trained effectively with limited labeled data. Moreover, our approach operates at either the tissue- or slide-level, removing the need for laborious patch-level labeling. Our method uses knowledge distillation to transfer knowledge from a teacher model pre-trained at high resolution to a student model trained on the same images at a considerably lower resolution. Also, to address the lack of large-scale labeled histology image datasets, we perform the knowledge distillation in a self-supervised fashion. We evaluate our approach on three distinct histology image datasets associated with celiac disease, lung adenocarcinoma, and renal cell carcinoma. Our results on these datasets demonstrate that a combination of knowledge distillation and self-supervision allows the student model to approach and, in some cases, surpass the teacher model's classification accuracy while being much more computationally efficient. Additionally, we observe an increase in student classification performance as the size of the unlabeled dataset increases, indicating that there is potential for this method to scale further with additional unlabeled data. Our model outperforms the high-resolution teacher model for celiac disease in accuracy, F1-score, precision, and recall while requiring 4 times fewer computations. For lung adenocarcinoma, our results at 1.25× magnification are within 1.5% of the results for the teacher model at 10× magnification, with a reduction in computational cost by a factor of 64. 
Our model on renal cell carcinoma at 1.25× magnification performs within 1% of the teacher model at 5× magnification while requiring 16 times fewer computations. Furthermore, our celiac disease outcomes benefit from additional performance scaling with the use of more unlabeled data. In the case of 0.625× magnification, using unlabeled data improves accuracy by 4% over the tissue-level baseline. Therefore, our approach can improve the feasibility of deep learning solutions for digital pathology on standard computational hardware and infrastructures.
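The distillation objective described above can be sketched with the standard temperature-softened KL loss (the T² factor follows the usual Hinton et al. convention); the resolution pairing and self-supervised setup are not shown, so treat this as a generic knowledge-distillation sketch rather than the paper's exact loss:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))) * T * T
```

The loss is zero when the low-resolution student exactly matches the high-resolution teacher's softened predictions, and positive otherwise.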
Affiliation(s)
- Joseph DiPalma
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Arief A Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Laura J Tafe
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Lorenzo Torresani
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Saeed Hassanpour
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA; Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
34
Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
35
Klein C, Zeng Q, Arbaretaz F, Devêvre E, Calderaro J, Lomenie N, Maiuri MC. Artificial Intelligence for solid tumor diagnosis in digital pathology. Br J Pharmacol 2021; 178:4291-4315. [PMID: 34302297 DOI: 10.1111/bph.15633] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 02/05/2021] [Accepted: 02/07/2021] [Indexed: 11/30/2022] Open
Abstract
Tumor diagnosis relies on the visual examination of histological slides by pathologists through a microscope eyepiece. Digital pathology, the digitization of histological slides at high magnification with slide scanners, has created the opportunity to extract quantitative information through image analysis. In the last decade, medical image analysis has made exceptional progress due to the development of artificial intelligence (AI) algorithms. AI has been successfully used in the field of medical imaging and, more recently, in digital pathology. The feasibility and usefulness of AI-assisted pathology tasks have been demonstrated in the last few years, and we can expect those developments to be applied to routine histopathology in the future. In this review, we describe and illustrate this technique and present the most recent applications in the field of tumor histopathology.
Affiliation(s)
- Christophe Klein
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
- Qinghe Zeng
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France; Laboratoire d'informatique Paris Descartes (LIPADE), Université de Paris, Paris, France
- Floriane Arbaretaz
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
- Estelle Devêvre
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
- Julien Calderaro
- Département de pathologie, Hôpital Henri Mondor, Créteil, France
- Nicolas Lomenie
- Laboratoire d'informatique Paris Descartes (LIPADE), Université de Paris, Paris, France
- Maria Chiara Maiuri
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
36
Pinckaers H, Bulten W, van der Laak J, Litjens G. Detection of Prostate Cancer in Whole-Slide Images Through End-to-End Training With Image-Level Labels. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1817-1826. [PMID: 33729928 DOI: 10.1109/tmi.2021.3066295] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Prostate cancer is the most prevalent cancer among men in Western countries, with 1.1 million new diagnoses every year. The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To potentially assist pathologists, deep learning-based cancer detection systems have been developed. Many of the state-of-the-art models are patch-based convolutional neural networks, as the use of entire scanned slides is hampered by memory limitations on accelerator cards. Patch-based systems typically require detailed, pixel-level annotations for effective training. However, such annotations are seldom readily available, in contrast to the clinical reports of pathologists, which contain slide-level labels. As such, developing algorithms which do not require manual pixel-wise annotations, but can learn using only the clinical report, would be a significant advancement for the field. In this paper, we propose to use a streaming implementation of convolutional layers to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4712 prostate biopsies. The method enables the use of entire biopsy images at high resolution directly, reducing the GPU memory requirements by 2.4 TB. We show that modern CNNs, trained using our streaming approach, can extract meaningful features from high-resolution images without additional heuristics, reaching similar performance as state-of-the-art patch-based and multiple-instance learning methods. By circumventing the need for manual annotations, this approach can function as a blueprint for other tasks in histopathological diagnosis. The source code to reproduce the streaming models is available at https://github.com/DIAGNijmegen/pathology-streaming-pipeline.
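The key idea of streaming — computing an activation map strip by strip with kernel-sized overlap so the full map never sits in memory at once — can be illustrated with a toy 2-D convolution. This is a conceptual sketch of the memory trick, not the authors' GPU implementation:

```python
import numpy as np

def conv_valid(img, k):
    """Naive 'valid' 2-D correlation, used here only for the demo."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def streamed_feature_sum(img, k, tile=8):
    """Process the image in horizontal strips, overlapping by
    (kernel height - 1) rows, and accumulate the global sum of the
    feature map without ever materialising it whole."""
    kh = k.shape[0]
    total, H = 0.0, img.shape[0]
    for top in range(0, H - kh + 1, tile):
        strip = img[top:top + tile + kh - 1, :]
        total += conv_valid(strip, k).sum()
    return total
```

Because convolution is local, the strip-wise result equals the full-image result exactly; the same reasoning lets gradients be accumulated tile by tile during end-to-end training.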
37
Mercan C, Aygunes B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J Biomed Health Inform 2021; 25:2041-2049. [PMID: 33166257 PMCID: PMC8274968 DOI: 10.1109/jbhi.2020.3036734] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. RESULTS Experiments using a well-characterized data set of 240 slides containing 437 ROIs marked by experienced pathologists with variable sizes and shapes result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. SIGNIFICANCE The proposed generic representation that can be extracted from any type of deep convolutional architecture combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.
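The weighted-aggregation step described in METHODS can be sketched in a few lines of NumPy: each patch feature is weighted by the network's class predictions for that patch, the weighted copies are concatenated per class, and the ROI representation is the average over patches. A simplified, hypothetical rendering of the paper's aggregation, not its exact feature construction:

```python
import numpy as np

def roi_representation(patch_feats, class_scores):
    """Aggregate a variable number of fixed-size patch features into one
    fixed-size ROI vector.

    patch_feats: (N, D) per-patch deep features.
    class_scores: (N, C) class-probability predictions for each patch, used
        as diagnostic-relevance weights.
    Returns a (C*D,) ROI-level representation: one weighted copy of the
    feature per class, concatenated, then average-pooled over patches.
    """
    N, D = patch_feats.shape
    weighted = class_scores[:, :, None] * patch_feats[:, None, :]  # (N, C, D)
    per_patch = weighted.reshape(N, -1)                            # (N, C*D)
    return per_patch.mean(axis=0)                                  # (C*D,)
```

Because the output size depends only on C and D, ROIs with different numbers of patches map to representations of the same length, which is the point of the framework.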
38
Li J, Li W, Sisk A, Ye H, Wallace WD, Speier W, Arnold CW. A multi-resolution model for histopathology image classification and localization with multiple instance learning. Comput Biol Med 2021; 131:104253. [PMID: 33601084 DOI: 10.1016/j.compbiomed.2021.104253] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 01/31/2021] [Accepted: 02/03/2021] [Indexed: 12/17/2022]
Abstract
Large numbers of histopathological images have been digitized into high-resolution whole slide images, opening opportunities to develop computational image analysis tools that reduce pathologists' workload and potentially improve inter- and intra-observer agreement. Most previous work on whole slide image analysis has focused on classification or segmentation of small pre-selected regions of interest, which requires fine-grained annotation and is non-trivial to extend to large-scale whole-slide analysis. In this paper, we propose a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction. Instead of relying on expensive region- or pixel-level annotations, our model can be trained end-to-end with only slide-level labels. The model is developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients. The model achieved 92.7% accuracy and 81.8% Cohen's kappa for benign, low-grade (i.e., Grade Group 1), and high-grade (i.e., Grade Group ≥ 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2%, and an average precision (AP) of 97.4% for differentiating malignant and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for cancer detection on an external dataset.
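The two-stage idea, using low-resolution saliency to pick suspicious tiles and then predicting from only those tiles' high-resolution features, can be sketched as a toy function. The names and the simple top-k selection are hypothetical; the paper trains both stages jointly with attention rather than a hard cutoff:

```python
import numpy as np

def two_stage_mil(lowres_scores, highres_feats, k=4):
    """Toy two-stage multi-resolution MIL step.

    lowres_scores: (N,) saliency score per tile from a low-resolution pass.
    highres_feats: (N, D) high-resolution features for the same tiles.
    Selects the k most suspicious tiles and aggregates only their
    high-resolution features into a slide-level feature vector.
    """
    top = np.argsort(lowres_scores)[-k:]          # indices of suspicious tiles
    slide_feat = highres_feats[top].mean(axis=0)  # aggregate selected tiles
    return top, slide_feat
```

The saved computation comes from never extracting high-resolution features for tiles the saliency map rules out; only the slide-level label supervises the whole pipeline.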
Affiliation(s)
- Jiayun Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Anthony Sisk
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- Huihui Ye
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- W Dean Wallace
- Department of Pathology, USC, 2011 Zonal Avenue, Los Angeles, CA, 90033, USA
- William Speier
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Corey W Arnold
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
39
Li B, Mercan E, Mehta S, Knezevich S, Arnold CW, Weaver DL, Elmore JG, Shapiro LG. Classifying Breast Histopathology Images with a Ductal Instance-Oriented Pipeline. PROCEEDINGS OF THE ... IAPR INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION. INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION 2021; 2020:8727-8734. [PMID: 36745147 PMCID: PMC9893896 DOI: 10.1109/icpr48806.2021.9412824] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this study, we propose the Ductal Instance-Oriented Pipeline (DIOP), which contains a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features for diagnostic classification. Building on recent advances in instance segmentation and the Mask R-CNN model, our duct-level segmenter identifies each individual duct in a microscopic image and then extracts tissue-level information from the identified ductal instances. Leveraging the three levels of information obtained from these ductal instances and the histopathology image, the proposed DIOP outperforms previous approaches (both feature-based and CNN-based) in all diagnostic tasks; for the four-way classification task, the DIOP achieves performance comparable to general pathologists on this unique dataset. The proposed DIOP takes only a few seconds to run at inference time, so it could be used interactively on most modern computers. More clinical exploration is needed to study the robustness and generalizability of this system in the future.
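The three-level fusion can be pictured as pooling plus concatenation: duct-instance features are pooled over detected ducts, the semantic-segmentation output is summarized as tissue-area fractions, and both are joined with image-level features. A hypothetical simplification; the paper's exact feature construction differs:

```python
import numpy as np

def diop_feature_vector(duct_feats, tissue_hist, image_feats):
    """Fuse the three feature levels used for diagnostic classification.

    duct_feats: (n_ducts, D) per-duct instance features (from the duct-level
        instance segmenter).
    tissue_hist: (T,) pixel counts per tissue label (from the tissue-level
        semantic segmenter), normalized here to area fractions.
    image_feats: (F,) image-level features.
    """
    duct_summary = duct_feats.mean(axis=0)                 # pool over ducts
    tissue_frac = tissue_hist / max(tissue_hist.sum(), 1)  # normalize areas
    return np.concatenate([duct_summary, tissue_frac, image_feats])
```

A fixed-length vector like this is what a downstream diagnostic classifier consumes, regardless of how many ducts were detected in the image.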
Affiliation(s)
- Beibin Li
- University of Washington, Seattle, WA; Seattle Children's Hospital, Seattle, WA
40
Wu W, Mehta S, Nofallah S, Knezevich S, May CJ, Chang OH, Elmore JG, Shapiro LG. Scale-Aware Transformers for Diagnosing Melanocytic Lesions. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:163526-163541. [PMID: 35211363 PMCID: PMC8865389 DOI: 10.1109/access.2021.3132958] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Diagnosing melanocytic lesions is one of the most challenging areas of pathology with extensive intra- and inter-observer variability. The gold standard for a diagnosis of invasive melanoma is the examination of histopathological whole slide skin biopsy images by an experienced dermatopathologist. Digitized whole slide images offer novel opportunities for computer programs to improve the diagnostic performance of pathologists. In order to automatically classify such images, representations that reflect the content and context of the input images are needed. In this paper, we introduce a novel self-attention-based network to learn representations from digital whole slide images of melanocytic skin lesions at multiple scales. Our model softly weighs representations from multiple scales, allowing it to discriminate between diagnosis-relevant and -irrelevant information automatically. Our experiments show that our method outperforms five other state-of-the-art whole slide image classification methods by a significant margin. Our method also achieves comparable performance to 187 practicing U.S. pathologists who interpreted the same cases in an independent study. To facilitate relevant research, full training and inference code is made publicly available at https://github.com/meredith-wenjunwu/ScATNet.
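The soft multi-scale weighting can be reduced to a small sketch: one embedding per input scale, one learned relevance score per scale, and a softmax-weighted sum. This is a conceptual illustration of soft scale weighting with hypothetical parameter shapes, not the ScATNet architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_scales(scale_embeddings, gate_logits):
    """Softly weigh representations from multiple input scales.

    scale_embeddings: (S, D) one representation per scale.
    gate_logits: (S,) learned relevance score per scale.
    Returns the attention-weighted fused embedding (D,); scales judged
    diagnosis-irrelevant receive near-zero weight automatically.
    """
    w = softmax(gate_logits)
    return w @ scale_embeddings
```

Because the weights are a softmax rather than a hard choice, the model can blend scales when several carry complementary context, which is what "softly weighs" means in the abstract.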
Affiliation(s)
- Wenjun Wu
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Sachin Mehta
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Shima Nofallah
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Oliver H Chang
- Department of Pathology, University of Washington, Seattle, WA 98195, USA
- Joann G Elmore
- David Geffen School of Medicine, UCLA, Los Angeles, CA 90024, USA
- Linda G Shapiro
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
41
Tavolara TE, Niazi MKK, Ginese M, Piedra-Mora C, Gatti DM, Beamer G, Gurcan MN. Automatic discovery of clinically interpretable imaging biomarkers for Mycobacterium tuberculosis supersusceptibility using deep learning. EBioMedicine 2020; 62:103094. [PMID: 33166789 PMCID: PMC7658666 DOI: 10.1016/j.ebiom.2020.103094] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Revised: 10/09/2020] [Accepted: 10/12/2020] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Identifying which individuals will develop tuberculosis (TB) remains an unresolved problem due to few animal models and computational approaches that effectively address its heterogeneity. To address these shortcomings, we show that Diversity Outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). METHODS Following M.tb infection, a "supersusceptible" phenotype develops in approximately one-third of DO mice, characterized by rapid morbidity and mortality within 8 weeks. These supersusceptible DO mice develop lung granuloma patterns akin to those in humans. This led us to utilize deep learning to identify supersusceptibility from hematoxylin & eosin (H&E) lung tissue sections using only clinical outcomes (supersusceptible or not-supersusceptible) as labels. FINDINGS The proposed machine learning model diagnosed supersusceptibility with high accuracy (91.50 ± 4.68%) compared to two expert pathologists using H&E stained lung sections (94.95% and 94.58%). Two non-experts used the imaging biomarker to diagnose supersusceptibility with high accuracy (88.25% and 87.95%) and agreement (96.00%). A board-certified veterinary pathologist (GB) examined the imaging biomarker and determined that the model was making diagnostic decisions using a form of granuloma necrosis (karyorrhectic and pyknotic nuclear debris). This was corroborated by one other board-certified veterinary pathologist. Finally, the imaging biomarker was quantified, providing a novel means to convert visual patterns within granulomas into data suitable for statistical analyses. IMPLICATIONS Overall, our results have translatable implications for improving our understanding of TB and for the broader field of computational pathology, in which clinical outcomes alone can drive automatic identification of interpretable imaging biomarkers, knowledge discovery, and validation of existing clinical biomarkers.
FUNDING National Institutes of Health and American Lung Association.
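Learning interpretable biomarkers from outcome labels alone is typically done with attention-based multiple-instance pooling: the slide label supervises a weighted average of patch embeddings, and the learned attention weights point at the tissue regions (here, necrotic granuloma debris) driving the prediction. A minimal, hypothetical sketch of that pooling step, not the authors' exact model:

```python
import numpy as np

def attention_mil_pool(instance_feats, v, w):
    """Attention-based MIL pooling over patch embeddings.

    instance_feats: (N, D) one embedding per tissue patch.
    v: (D, H) and w: (H,) attention parameters (hypothetical shapes).
    Returns (attention weights, bag embedding); high-weight patches are the
    candidate imaging-biomarker regions a pathologist can inspect.
    """
    scores = np.tanh(instance_feats @ v) @ w  # (N,) relevance per patch
    a = np.exp(scores - scores.max())
    a /= a.sum()                              # softmax attention weights
    return a, a @ instance_feats              # weights and slide embedding
```

Only the slide-level outcome label is needed for training; inspecting the weights `a` afterwards is what turns the model into a biomarker-discovery tool.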
Affiliation(s)
- Thomas E Tavolara
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- M Khalid Khan Niazi
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- Melanie Ginese
- Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Cesar Piedra-Mora
- Department of Biomedical Sciences, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Daniel M Gatti
- The College of the Atlantic, 105 Eden Street, Bar Harbor, ME 04609, United States
- Gillian Beamer
- Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
42
Yao J, Zhu X, Jonnagaddala J, Hawkins N, Huang J. Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks. Med Image Anal 2020; 65:101789. [DOI: 10.1016/j.media.2020.101789] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Revised: 06/29/2020] [Accepted: 07/16/2020] [Indexed: 01/25/2023]
43
Automatic detection of non-perfusion areas in diabetic macular edema from fundus fluorescein angiography for decision making using deep learning. Sci Rep 2020; 10:15138. [PMID: 32934283 PMCID: PMC7492239 DOI: 10.1038/s41598-020-71622-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 07/30/2020] [Indexed: 02/05/2023] Open
Abstract
Vision loss caused by diabetic macular edema (DME) can be prevented by early detection and laser photocoagulation. As there is no comprehensive technique for detecting non-perfusion areas (NPA), we propose an automatic method for detecting NPA on fundus fluorescein angiography (FFA) in DME. The study included 3,014 FFA images of 221 patients with DME. We used 3 convolutional neural networks (CNNs), namely DenseNet, ResNet50, and VGG16, to identify non-perfusion regions (NP), microaneurysms, and leakages in FFA images. The NPA was segmented using an attention U-Net. To validate its performance, we applied our detection algorithm to 249 FFA images in which the NPA were manually delineated by 3 ophthalmologists. For DR lesion classification, the area under the curve is 0.8855 for NP regions, 0.9782 for microaneurysms, and 0.9765 for the leakage classifier. The average precision of the NP region overlap ratio is 0.643. NP regions of DME in FFA images are thus identified by a new automated deep learning algorithm. This study spans computer-aided diagnosis through treatment and will serve as the theoretical basis for the application of intelligently guided laser treatment.
44
Wan T, Zhao L, Feng H, Li D, Tong C, Qin Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.08.103] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
45
Wang X, Chen H, Gan C, Lin H, Dou Q, Tsougenis E, Huang Q, Cai M, Heng PA. Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:3950-3962. [PMID: 31484154 DOI: 10.1109/tcyb.2019.2935141] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Histopathology image analysis serves as the gold standard for cancer diagnosis. Efficient and precise diagnosis is critical for the subsequent therapeutic treatment of patients. So far, computer-aided diagnosis has not been widely applied in the pathology field, as the currently well-addressed tasks are only the tip of the iceberg. Whole slide image (WSI) classification is a challenging problem. First, the scarcity of annotations heavily impedes the pace of developing effective approaches: pixel-wise delineated annotations on WSIs are time-consuming and tedious, which poses difficulties in building a large-scale training dataset. In addition, the variety of heterogeneous tumor patterns visible at high magnification is a major obstacle. Furthermore, a gigapixel-scale WSI cannot be directly analyzed due to the prohibitive computational cost. Designing weakly supervised learning methods that maximize the use of available WSI-level labels, which can be readily obtained in clinical practice, is therefore quite appealing. To overcome these challenges, we present a weakly supervised approach in this article for fast and effective classification of whole slide lung cancer images. Our method first takes advantage of a patch-based fully convolutional network (FCN) to retrieve discriminative blocks and provide representative deep features with high efficiency. Then, different context-aware block selection and feature aggregation strategies are explored to generate a globally holistic WSI descriptor, which is ultimately fed into a random forest (RF) classifier for the image-level prediction. To the best of our knowledge, this is the first study to exploit the potential of image-level labels along with some coarse annotations for weakly supervised learning. A large-scale lung cancer WSI dataset is constructed in this article for evaluation, which validates the effectiveness and feasibility of the proposed method.
Extensive experiments demonstrate the superior performance of our method that surpasses the state-of-the-art approaches by a significant margin with an accuracy of 97.3%. In addition, our method also achieves the best performance on the public lung cancer WSIs dataset from The Cancer Genome Atlas (TCGA). We highlight that a small number of coarse annotations can contribute to further accuracy improvement. We believe that weakly supervised learning methods have great potential to assist pathologists in histology image diagnosis in the near future.
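The block-selection-plus-aggregation stage described above can be sketched in miniature: keep the blocks the patch-level FCN finds most discriminative, then pool their features into one global WSI descriptor for a downstream classifier such as a random forest. The function name, the fixed keep-fraction, and the mean+max pooling are hypothetical simplifications of the paper's context-aware strategies:

```python
import numpy as np

def wsi_descriptor(block_feats, block_probs, keep=0.2):
    """Select discriminative blocks and aggregate them into a WSI descriptor.

    block_feats: (N, D) deep features for N retrieved blocks.
    block_probs: (N,) tumor probability per block from the patch-level FCN.
    Keeps the top `keep` fraction of blocks, then concatenates mean- and
    max-pooled statistics of the selected features.
    """
    n_keep = max(1, int(len(block_probs) * keep))
    idx = np.argsort(block_probs)[-n_keep:]
    sel = block_feats[idx]
    return np.concatenate([sel.mean(axis=0), sel.max(axis=0)])
```

The resulting fixed-length descriptor is what makes a gigapixel slide tractable for a conventional classifier trained on image-level labels.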
46
A Probabilistic Bag-to-Class Approach to Multiple-Instance Learning. DATA 2020. [DOI: 10.3390/data5020056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Multi-instance (MI) learning is a branch of machine learning where each object (bag) consists of multiple feature vectors (instances): for example, an image consisting of multiple patches and their corresponding feature vectors. In MI classification, each bag in the training set has a class label, but the instances are unlabeled. The instances are most commonly regarded as a set of points in a multi-dimensional space. Alternatively, instances are viewed as realizations of random vectors with a corresponding probability distribution, where the bag is the distribution, not the realizations. By introducing the probability distribution space to bag-level classification problems, dissimilarities between probability distributions (divergences) can be applied. The bag-to-bag Kullback–Leibler information is asymptotically the best classifier, but the typical sparseness of MI training sets is an obstacle. We introduce bag-to-class divergence to MI learning, emphasizing the hierarchical nature of the random vectors that makes bags from the same class different. We propose two properties for bag-to-class divergences, plus an additional property for sparse training sets, and introduce a dissimilarity measure that fulfils them. Its performance is demonstrated on synthetic and real data. The probability distribution space is valid for MI learning, both for theoretical analysis and for applications.
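The bag-to-class view can be made concrete in one dimension: fit a distribution to the bag's instances and to each class's pooled instances, then assign the class whose distribution is closest under a divergence. The sketch below uses Gaussian fits and the closed-form KL divergence; this illustrates the divergence-based framing, not the paper's specific dissimilarity measure:

```python
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """Closed-form KL divergence KL(N(mu0,var0) || N(mu1,var1))
    between two univariate normal distributions."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def classify_bag(bag, class_instances):
    """Bag-to-class classification: fit a normal to the bag's instances and
    to each class's pooled training instances, then return the index of the
    class whose distribution is closest in KL divergence."""
    mu_b, var_b = bag.mean(), bag.var()
    kls = [gaussian_kl(mu_b, var_b, c.mean(), c.var()) for c in class_instances]
    return int(np.argmin(kls))
```

Pooling each class's instances is what makes the scheme robust to sparse training sets: the class distribution is estimated from many bags at once rather than from one bag at a time.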
47
Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond) 2020; 40:154-166. [PMID: 32277744 PMCID: PMC7170661 DOI: 10.1002/cac2.12012] [Citation(s) in RCA: 174] [Impact Index Per Article: 43.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 02/06/2020] [Indexed: 12/11/2022] Open
Abstract
The development of digital pathology and the progression of state-of-the-art computer vision algorithms have led to increasing interest in the use of artificial intelligence (AI), especially deep learning (DL)-based AI, in tumor pathology. DL-based algorithms have been developed to conduct all kinds of work involved in tumor pathology, including tumor diagnosis, subtyping, grading, staging, and prognostic prediction, as well as the identification of pathological features, biomarkers, and genetic changes. The applications of AI in pathology not only help improve diagnostic accuracy and objectivity but also reduce the workload of pathologists, subsequently enabling them to spend additional time on high-level decision-making tasks. In addition, AI helps pathologists meet the requirements of precision oncology. However, there are still challenges to the implementation of AI, including issues of algorithm validation and interpretability, computing systems, skepticism among pathologists, clinicians, and patients, as well as regulatory and reimbursement hurdles. Herein, we present an overview of how AI-based approaches could be integrated into the workflow of pathologists and discuss the challenges and perspectives of the implementation of AI in tumor pathology.
Affiliation(s)
- Yahui Jiang
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Meng Yang
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Shuhao Wang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, P. R. China
- Xiangchun Li
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Yan Sun
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
48
Zhu D, Zhu H, Liu X, Li H, Wang F, Li H, Feng D. CREDO: Efficient and privacy-preserving multi-level medical pre-diagnosis based on ML-kNN. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2019.11.041] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
49
Loukas C, Sgouros NP. Multi‐instance multi‐label learning for surgical image annotation. Int J Med Robot 2020; 16:e2058. [DOI: 10.1002/rcs.2058] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 10/30/2019] [Accepted: 11/06/2019] [Indexed: 12/23/2022]
Affiliation(s)
- Constantinos Loukas
- Laboratory of Medical Physics, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Nicholas P. Sgouros
- Department of Informatics, National and Kapodistrian University of Athens, Athens, Greece
50
Multi-label classification of retinal lesions in diabetic retinopathy for automatic analysis of fundus fluorescein angiography based on deep learning. Graefes Arch Clin Exp Ophthalmol 2020; 258:779-785. [PMID: 31932886 DOI: 10.1007/s00417-019-04575-w] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 10/30/2019] [Accepted: 12/13/2019] [Indexed: 01/04/2023] Open
Abstract
PURPOSE To automatically detect and classify the lesions of diabetic retinopathy (DR) in fundus fluorescein angiography (FFA) images using deep learning, comparing 3 convolutional neural networks (CNNs). METHODS A total of 4067 FFA images from the Eye Center at the Second Affiliated Hospital of Zhejiang University School of Medicine were annotated with 4 kinds of DR lesions: non-perfusion regions (NP), microaneurysms, leakages, and laser scars. Three CNNs, DenseNet, ResNet50, and VGG16, were trained to achieve multi-label classification, meaning the algorithms could identify all 4 retinal lesions at the same time. RESULTS The area under the curve (AUC) of DenseNet reached 0.8703, 0.9435, 0.9647, and 0.9653 for detecting NP, microaneurysms, leakages, and laser scars, respectively. For ResNet50, AUC was 0.8140 for NP, 0.9097 for microaneurysms, 0.9585 for leakages, and 0.9115 for laser scars. For VGG16, AUC was 0.7125 for NP, 0.5569 for microaneurysms, 0.9177 for leakages, and 0.8537 for laser scars. CONCLUSIONS Experimental results demonstrate that DenseNet is a suitable model to automatically detect and distinguish retinal lesions in FFA images with multi-label classification, which lays the foundation for automatic analysis of FFA images and for comprehensive diagnosis and treatment decision-making in DR.
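The per-lesion AUC evaluation used here treats each lesion type as an independent binary problem: one sigmoid score per lesion, one ROC curve per lesion. A small sketch of that evaluation via the Mann-Whitney rank statistic (ties ignored for brevity; the function names are illustrative):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive example outranks a randomly chosen negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def multilabel_auc(y_true, y_score):
    """Per-lesion AUC for multi-label outputs: one column per lesion type
    (e.g., NP, microaneurysm, leakage, laser scar)."""
    return [roc_auc(y_true[:, j], y_score[:, j]) for j in range(y_true.shape[1])]
```

Reporting AUC per label, as the abstract does for each network, shows which lesion types a model handles well even when a single image carries several lesions at once.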