1
Zhao B, Deng W, Li ZHH, Zhou C, Gao Z, Wang G, Li X. LESS: Label-efficient multi-scale learning for cytological whole slide image screening. Med Image Anal 2024;94:103109. [PMID: 38387243] [DOI: 10.1016/j.media.2024.103109] [Received: 2023-09-19; Revised: 2023-12-31; Accepted: 2024-02-15]
Abstract
In computational pathology, multiple instance learning (MIL) is widely used to circumvent the computational impasse of giga-pixel whole slide image (WSI) analysis. It usually consists of two stages: patch-level feature extraction and slide-level aggregation. Recently, pretrained models or self-supervised learning have been used to extract patch features, but both sacrifice effectiveness or efficiency because they overlook the task-specific supervision that slide labels provide. Here we propose a weakly-supervised Label-Efficient WSI Screening method, dubbed LESS, for cytological WSI analysis with only slide-level labels, which can be applied effectively to small datasets. First, we use variational positive-unlabeled (VPU) learning to uncover the hidden labels of both benign and malignant patches, so that slide-level labels supply appropriate supervision for learning patch-level features. Next, to account for the sparse and random arrangement of cells in cytological WSIs, we crop patches at multiple scales and use a cross-attention vision transformer (CrossViT) to fuse information across scales for WSI classification. Together, the two steps achieve task alignment, improving both effectiveness and efficiency. We validate the proposed label-efficient method on a urine cytology WSI dataset of 130 samples (13,000 patches) and the breast cytology dataset FNAC 2019 of 212 samples (21,200 patches). LESS reaches 84.79% accuracy, 85.43% AUC, 91.79% sensitivity and 78.30% specificity on the urine cytology WSI dataset, and 96.88%, 96.86%, 98.95% and 97.06%, respectively, on the breast cytology high-resolution-image dataset. It outperforms state-of-the-art MIL methods on pathology WSIs and realizes automatic cytological WSI cancer screening.
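The two-stage MIL pipeline summarized above (patch-level feature extraction, then slide-level aggregation) can be sketched with a generic attention-based pooling step. This is an illustrative sketch of standard attention-MIL aggregation, not the authors' LESS/VPU implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def attention_mil_pool(patch_feats, w, v):
    """Generic attention-based MIL pooling: score each patch embedding,
    softmax the scores over the bag, then take the weighted sum."""
    # patch_feats: (n_patches, d); w: (d, h); v: (h,)
    scores = np.tanh(patch_feats @ w) @ v            # one score per patch
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over the bag
    slide_feat = weights @ patch_feats               # (d,) slide-level feature
    return slide_feat, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))   # 100 patch embeddings of dimension 32
w = rng.normal(size=(32, 16))
v = rng.normal(size=(16,))
slide, attn = attention_mil_pool(feats, w, v)
print(slide.shape)                   # slide-level feature, shape (32,)
```

A slide-level classifier would then operate on `slide_feat`; the attention weights indicate which patches drove the prediction.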
Affiliation(s)
- Beidi Zhao: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada; Vector Institute, Toronto, ON M5G 1M1, Canada
- Wenlong Deng: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada; Vector Institute, Toronto, ON M5G 1M1, Canada
- Zi Han Henry Li: Department of Pathology, BC Cancer Agency, Vancouver, BC V5Z 4E6, Canada
- Chen Zhou: Department of Pathology, BC Cancer Agency, Vancouver, BC V5Z 4E6, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC V6T 2B5, Canada
- Zuhua Gao: Department of Pathology, BC Cancer Agency, Vancouver, BC V5Z 4E6, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC V6T 2B5, Canada
- Gang Wang: Department of Pathology, BC Cancer Agency, Vancouver, BC V5Z 4E6, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC V6T 2B5, Canada
- Xiaoxiao Li: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada; Vector Institute, Toronto, ON M5G 1M1, Canada
2
Atabansi CC, Nie J, Liu H, Song Q, Yan L, Zhou X. A survey of Transformer applications for histopathological image analysis: New developments and future directions. Biomed Eng Online 2023;22:96. [PMID: 37749595] [PMCID: PMC10518923] [DOI: 10.1186/s12938-023-01157-0] [Received: 2023-07-18; Accepted: 2023-09-15] [Open Access]
Abstract
Transformers have been widely used in many computer vision challenges and have shown the capability of producing better results than convolutional neural networks (CNNs). Taking advantage of their ability to capture long-range contextual information and learn more complex relations in image data, Transformers have been applied to histopathological image processing tasks. In this survey, we present a thorough analysis of the uses of Transformers in histopathological image analysis, covering several topics, from newly built Transformer models to unresolved challenges. We first outline the fundamental principles of the attention mechanism included in Transformer models and other key frameworks. We then analyze Transformer-based applications in the histopathological imaging domain, providing a thorough evaluation of more than 100 research publications across downstream tasks to cover the most recent innovations, including survival analysis and prediction, segmentation, classification, detection, and representation. We also compare the performance of CNN-based techniques with Transformers on the basis of recently published papers, highlight major challenges, and suggest promising future research directions. Despite the outstanding performance of Transformer-based architectures in many of the papers reviewed in this survey, we anticipate that further improvement and exploration of Transformers in the histopathological imaging domain are still required. We hope this survey gives readers in this field a thorough understanding of Transformer-based techniques in histopathological image analysis; an up-to-date paper list is maintained at https://github.com/S-domain/Survey-Paper.
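The attention mechanism that the survey takes as its starting point can be sketched in a few lines. This is the generic scaled dot-product attention formulation, softmax(QKᵀ/√d_k)V, not code from any surveyed paper; the shapes and names are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity scores
    logits -= logits.max(axis=-1, keepdims=True)     # subtract max for stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))   # 4 query tokens, head dimension 8
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))   # 6 value vectors
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # (4, 8): one output per query token
```

In a vision Transformer the tokens are embeddings of image patches, so each output row mixes information from all patches, which is the long-range-context property noted above.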
Affiliation(s)
- Jing Nie: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China
- Haijun Liu: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China
- Qianqian Song: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China
- Lingfeng Yan: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China
- Xichuan Zhou: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China