1
Zhang S, Yang B, Yang H, Zhao J, Zhang Y, Gao Y, Monteiro O, Zhang K, Liu B, Wang S. Potential rapid intraoperative cancer diagnosis using dynamic full-field optical coherence tomography and deep learning: A prospective cohort study in breast cancer patients. Sci Bull (Beijing) 2024; 69:1748-1756. [PMID: 38702279] [DOI: 10.1016/j.scib.2024.03.061]
Abstract
An intraoperative diagnosis is critical for precise cancer surgery. However, traditional intraoperative assessments based on hematoxylin and eosin (H&E) histology, such as frozen section, are time-, resource-, and labor-intensive and raise concerns about specimen consumption. Here, we report a near-real-time automated cancer diagnosis workflow for breast cancer that combines dynamic full-field optical coherence tomography (D-FFOCT), a label-free optical imaging method, with deep learning for bedside tumor diagnosis during surgery. To classify benign and malignant breast tissue, we conducted a prospective cohort trial. In the modeling group (n = 182), D-FFOCT images were captured from April 26 to June 20, 2018, encompassing 48 benign lesions, 114 invasive ductal carcinomas (IDC), 10 invasive lobular carcinomas, 4 ductal carcinomas in situ (DCIS), and 6 rare tumors. A deep learning model was built and fine-tuned on 10,357 D-FFOCT patches. Subsequently, from June 22 to August 17, 2018, independent tests (n = 42) were conducted on 10 benign lesions, 29 IDC, 1 DCIS, and 2 rare tumors. The model yielded excellent performance, with an accuracy of 97.62%, sensitivity of 96.88%, and specificity of 100%; only one IDC was misclassified. Meanwhile, acquisition of the D-FFOCT images was non-destructive and required no tissue preparation or staining. In a simulated intraoperative margin evaluation procedure, the time required for the novel workflow (approximately 3 min) was significantly shorter than that for the traditional procedure (approximately 30 min). These findings indicate that combining D-FFOCT with deep learning algorithms can streamline intraoperative cancer diagnosis independently of traditional pathology laboratory procedures.
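Illustrative sketch only: one plausible way per-patch malignancy probabilities could be rolled up into a specimen-level call. The abstract reports patch-based training and testing but does not state the aggregation rule, so the function name and both thresholds below are assumptions, not the study's parameters.

```python
# Hypothetical patch-to-specimen aggregation; thresholds are toy values,
# not taken from the paper.

def specimen_diagnosis(patch_scores, patch_threshold=0.5, malignant_fraction=0.5):
    """Call a specimen malignant when enough patches look malignant."""
    if not patch_scores:
        raise ValueError("need at least one patch score")
    malignant = sum(1 for s in patch_scores if s >= patch_threshold)
    return "malignant" if malignant / len(patch_scores) >= malignant_fraction else "benign"
```

Under these toy thresholds, a specimen with patch scores [0.9, 0.8, 0.2, 0.7] would be called malignant, since three of four patches exceed the patch threshold.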
MESH Headings
- Humans
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/surgery
- Breast Neoplasms/pathology
- Tomography, Optical Coherence/methods
- Deep Learning
- Female
- Prospective Studies
- Middle Aged
- Carcinoma, Ductal, Breast/diagnostic imaging
- Carcinoma, Ductal, Breast/surgery
- Carcinoma, Ductal, Breast/pathology
- Aged
- Adult
- Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging
- Carcinoma, Intraductal, Noninfiltrating/surgery
- Carcinoma, Intraductal, Noninfiltrating/pathology
- Intraoperative Period
Affiliation(s)
- Shuwei Zhang
- Breast Center, Peking University People's Hospital, Beijing 100044, China
- Bin Yang
- China ESG Institute, Capital University of Economics and Business, Beijing 100070, China; Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Houpu Yang
- Breast Center, Peking University People's Hospital, Beijing 100044, China
- Jin Zhao
- Breast Center, Peking University People's Hospital, Beijing 100044, China
- Yuanyuan Zhang
- Department of Pathology, Peking University People's Hospital, Beijing 100044, China
- Yuanxu Gao
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology, Macao 999078, China
- Olivia Monteiro
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology, Macao 999078, China
- Kang Zhang
- Center for Biomedicine and Innovations, Faculty of Medicine, Macau University of Science and Technology, Macao 999078, China; College of Future Technology, Peking University, Beijing 100091, China
- Bo Liu
- School of Mathematical and Computational Sciences, Massey University, Auckland 0745, New Zealand
- Shu Wang
- Breast Center, Peking University People's Hospital, Beijing 100044, China
2
Zhang X, Liu C, Zhu H, Wang T, Du Z, Ding W. A universal multiple instance learning framework for whole slide image analysis. Comput Biol Med 2024; 178:108714. [PMID: 38889627] [DOI: 10.1016/j.compbiomed.2024.108714]
Abstract
BACKGROUND The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS We propose a universal framework for weakly supervised WSI analysis based on multiple instance learning (MIL). To aggregate instance features effectively, we design a feature aggregation module along multiple dimensions, considering feature distribution, instance correlations, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules improves WSI prediction accuracy. RESULTS Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA, and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION Our method improves the classification accuracy of whole slide images in a weakly supervised way and more accurately detects lesion areas.
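The instance aggregation such MIL frameworks build on can be sketched in a few lines: each instance gets a scalar attention score, the scores are softmax-normalized over the bag, and the bag representation is the attention-weighted sum of instance features. This is a minimal, dependency-free sketch in the spirit of attention-based MIL pooling; the feature sizes, scoring weights, and example bag are toy stand-ins, not the paper's architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(instances, score_w):
    """instances: list of equal-length feature vectors (one per patch);
    score_w: toy weight vector giving each instance a scalar score."""
    scores = [sum(w * x for w, x in zip(score_w, inst)) for inst in instances]
    attn = softmax(scores)  # attention distribution over instances
    dim = len(instances[0])
    # bag-level representation: attention-weighted sum of instance features
    rep = [sum(a * inst[d] for a, inst in zip(attn, instances)) for d in range(dim)]
    return rep, attn

bag = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
rep, attn = attention_pool(bag, score_w=[2.0, -1.0])
```

A bag-level classifier would then consume `rep`, while `attn` indicates which instances drove the prediction.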
Affiliation(s)
- Xueqin Zhang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China; Shanghai Key Laboratory of Computer Software Evaluating and Testing, Shanghai 201112, China
- Chang Liu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Huitong Zhu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Tianqi Wang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Zunguo Du
- Department of Pathology, Huashan Hospital Affiliated to Fudan University, Shanghai 200040, China
- Weihong Ding
- Department of Urology, Huashan Hospital Affiliated to Fudan University, Shanghai 200040, China
3
Chen Z, Wong IHM, Dai W, Lo CTK, Wong TTW. Lung Cancer Diagnosis on Virtual Histologically Stained Tissue Using Weakly Supervised Learning. Mod Pathol 2024; 37:100487. [PMID: 38588884] [DOI: 10.1016/j.modpat.2024.100487]
Abstract
Lung adenocarcinoma (LUAD) is the most common primary lung cancer, accounting for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is the pathologist's interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire H&E-stained images and the subsequent diagnostic procedures are labor-intensive and time-consuming, with tedious sample preparation steps and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. Autofluorescence images of label-free tissue bearing histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained an attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide image (WSI) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital, with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on 58 standard H&E-stained WSIs, with an average AUC of 0.977. The attention heatmaps of virtual and ground-truth H&E-stained WSIs can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
Affiliation(s)
- Zhenghui Chen
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Ivy H M Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Weixing Dai
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Claudia T K Lo
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Terence T W Wong
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
4
Ma M, Zeng X, Qu L, Sheng X, Ren H, Chen W, Li B, You Q, Xiao L, Wang Y, Dai M, Zhang B, Lu C, Sheng W, Huang D. Advancing Automatic Gastritis Diagnosis: An Interpretable Multilabel Deep Learning Framework for the Simultaneous Assessment of Multiple Indicators. Am J Pathol 2024:S0002-9440(24)00175-5. [PMID: 38762117] [DOI: 10.1016/j.ajpath.2024.04.007]
Abstract
The evaluation of morphologic features, such as inflammation, gastric atrophy, and intestinal metaplasia, is crucial for diagnosing gastritis. However, artificial intelligence analysis of nontumor diseases such as gastritis remains limited. Previous deep learning models have omitted important morphologic indicators and cannot diagnose multiple gastritis indicators simultaneously or provide interpretable labels. To address this, an attention-based multi-instance multilabel learning network (AMMNet) was developed to achieve multilabel diagnosis of activity, atrophy, and intestinal metaplasia simultaneously using only slide-level weak labels. To evaluate AMMNet's real-world performance, a diagnostic test was designed to observe improvements in junior pathologists' diagnostic accuracy and efficiency with and without AMMNet assistance. In this study of 1096 patients from seven independent medical centers, AMMNet performed well in assessing activity [area under the curve (AUC), 0.93], atrophy (AUC, 0.97), and intestinal metaplasia (AUC, 0.93). The false-negative rates for these indicators were only 0.04, 0.08, and 0.18, respectively, and junior pathologists had lower false-negative rates with model assistance (0.15 versus 0.10). Furthermore, AMMNet reduced the time required per whole slide image from 5.46 to 2.85 minutes, enhancing diagnostic efficiency. In block-level clustering analysis, AMMNet effectively visualized task-related patches within whole slide images, improving interpretability. These findings highlight AMMNet's effectiveness in accurately evaluating gastritis morphologic indicators on multicenter datasets. The use of multi-instance multilabel learning strategies to support routine diagnostic pathology deserves further evaluation.
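The multilabel output stage described above can be illustrated with a toy fragment: one sigmoid per indicator, so activity, atrophy, and intestinal metaplasia are predicted jointly but thresholded independently. The logit values and the 0.5 cutoff are made up for the demo, and AMMNet's attention-based feature extraction is not shown.

```python
import math

# Per-indicator labels for the three gastritis indicators in the abstract.
LABELS = ("activity", "atrophy", "intestinal_metaplasia")

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_predict(logits, threshold=0.5):
    """Map one slide's per-label logits to independent yes/no calls."""
    return {name: sigmoid(z) >= threshold for name, z in zip(LABELS, logits)}
```

This joint-but-independent structure is what lets a single forward pass report all three indicators for a slide at once.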
Affiliation(s)
- Mengke Ma
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xixi Zeng
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Linhao Qu
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xia Sheng
- Department of Pathology, Minhang Hospital, Fudan University, Shanghai, China
- Hongzheng Ren
- Department of Pathology, Gongli Hospital, Naval Medical University, Shanghai, China
- Weixiang Chen
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bin Li
- Department of Pathology, Shanghai Xu-Hui Central Hospital, Shanghai, China
- Qinghua You
- Department of Pathology, Shanghai Pudong Hospital, Fudan University Pudong Medical Center, Shanghai, China
- Li Xiao
- Department of Pathology, Huadong Hospital, Shanghai, China
- Yi Wang
- Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Mei Dai
- Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Boqiang Zhang
- Shanghai Foremost Medical Technology Co Ltd, Shanghai, China
- Changqing Lu
- Shanghai Foremost Medical Technology Co Ltd, Shanghai, China
- Weiqi Sheng
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Dan Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
5
Rini PL, Gayathri KS. Revolutionizing dementia detection: Leveraging vision and Swin transformers for early diagnosis. Am J Med Genet B Neuropsychiatr Genet 2024:e32979. [PMID: 38619385] [DOI: 10.1002/ajmg.b.32979]
Abstract
Dementia, an increasingly prevalent neurological disorder with a projected threefold rise globally by 2050, necessitates early detection for effective management. The risk notably increases after age 65. Dementia leads to a progressive decline in cognitive functions, affecting memory, reasoning, and problem-solving abilities. This decline can impact the individual's ability to perform daily tasks and make decisions, underscoring the crucial importance of timely identification. With the advent of technologies like computer vision and deep learning, the prospect of early detection becomes even more promising. Employing sophisticated algorithms on imaging data, such as positron emission tomography scans, facilitates the recognition of subtle structural brain changes, enabling diagnosis at an earlier stage for potentially more effective interventions. In an experimental study, the Swin transformer algorithm demonstrated superior overall accuracy compared to the vision transformer and convolutional neural network, emphasizing its efficiency. Detecting dementia early is essential for proactive management, personalized care, and implementing preventive measures, ultimately enhancing outcomes for individuals and lessening the overall burden on healthcare systems.
Affiliation(s)
- Rini P L
- Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, India
- Gayathri K S
- Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, India
6
Du F, Zhou H, Niu Y, Han Z, Sui X. Transformer-based model for lung adenocarcinoma subtypes. Med Phys 2024. [PMID: 38427790] [DOI: 10.1002/mp.17006]
Abstract
BACKGROUND Lung cancer has the highest morbidity and mortality rate among all types of cancer. Histological subtypes serve as crucial markers for the development of lung cancer and possess significant clinical value for cancer diagnosis, prognosis, and prediction of treatment responses. However, existing studies only dichotomize normal and cancerous tissues, failing to capture the unique characteristics of tissue sections and cancer types. PURPOSE Therefore, we have pioneered the classification of lung adenocarcinoma (LAD) cancer tissues into five subtypes (acinar, lepidic, micropapillary, papillary, and solid) based on whole-slide image sections. In addition, a novel model called HybridNet was designed to improve classification performance. METHODS HybridNet primarily consists of two interactive streams: a Transformer and a convolutional neural network (CNN). The Transformer stream captures rich global representations using a self-attention mechanism, while the CNN stream extracts local semantic features to optimize image details. Specifically, while the two streams run in parallel, the feature maps of the Transformer stream serve as weights and are combined with those of the CNN stream backbone; at the end, the final features of the two streams are concatenated to obtain more discriminative semantic information. RESULTS Experimental results on a private LAD dataset showed that HybridNet achieved 95.12% classification accuracy, and the accuracy for the five histological subtypes (acinar, lepidic, micropapillary, papillary, and solid) reached 94.5%, 97.1%, 94%, 91%, and 99%, respectively; experimental results on the public BreakHis dataset show that HybridNet achieves the best results on three evaluation metrics, accuracy, recall, and F1-score, with 92.40%, 90.63%, and 91.43%, respectively. CONCLUSIONS Classifying LAD into five subtypes assists pathologists in selecting appropriate treatments and enables them to predict tumor mutation burden (TMB) and analyze the spatial distribution of immune checkpoint proteins together with other clinical data. In addition, the proposed HybridNet fuses CNN and Transformer information several times, improves the accuracy of subtype classification, and shows satisfactory performance on public datasets with some generalization ability.
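A toy fragment can make the two fusion points concrete: mid-network, the Transformer stream's features act as weights on the CNN stream; at the end, both streams' final features are concatenated for the classifier head. The real HybridNet operates on 2-D feature maps with learned layers; flat lists stand in here, and the residual form of the weighting is one plausible reading of the abstract, not the paper's exact formula.

```python
# Hypothetical sketch of HybridNet-style dual-stream fusion on flat vectors.

def weighted_fuse(cnn_feats, trans_feats):
    """CNN features re-weighted element-wise by the Transformer stream,
    with a residual connection (an assumed form of the weighting)."""
    return [c + c * t for c, t in zip(cnn_feats, trans_feats)]

def final_fuse(cnn_final, trans_final):
    """Concatenate both streams' final features for the classifier head."""
    return list(cnn_final) + list(trans_final)
```

Fusing repeatedly mid-network and then concatenating at the end is what lets each stream correct the other before the final classification.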
Affiliation(s)
- Fawen Du
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China
- Huiyu Zhou
- School of Computing and Mathematic Sciences, University of Leicester, Leicester, UK
- Yi Niu
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China
- Zeyu Han
- School of Mathematics and Statistics, Shandong University, Weihai, China
- Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China
7
Gardner W, Winkler DA, Bamford SE, Muir BW, Pigram PJ. Markedly Enhanced Analysis of Mass Spectrometry Images Using Weakly Supervised Machine Learning. Small Methods 2024:e2301230. [PMID: 38204217] [DOI: 10.1002/smtd.202301230]
Abstract
Supervised and unsupervised machine learning algorithms are routinely applied to time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data and, more broadly, to mass spectrometry imaging (MSI). These algorithms have accelerated large-scale, single-pixel analysis, classification, and regression. However, there is relatively little research on methods suited to so-called weakly supervised problems, where ground-truth class labels exist at the image level but not at the individual pixel level. Unsupervised learning methods are usually applied to these problems, but they cannot make use of the available labels. Here, a novel method specifically designed for weakly supervised MSI data is presented. A dual-stream multiple instance learning (MIL) approach is adapted from computational pathology that reveals the spatial-spectral characteristics distinguishing different classes of MSI images. The method uses an information entropy-regularized attention mechanism to identify characteristic class pixels, which are then used to extract characteristic mass spectra. This work provides a proof of concept using printed ink samples imaged by ToF-SIMS. A second, application-oriented study focusing on the analysis of a mixed powder sample is also presented. The results demonstrate the potential of the MIL method for broader application in MSI, with implications for understanding subtle spatial-spectral characteristics in various applications and contexts.
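The entropy regularization mentioned above can be sketched directly: penalizing the Shannon entropy of the attention distribution pushes the model to concentrate attention on a few characteristic pixels rather than spreading it uniformly. The coefficient and the loss wiring below are illustrative assumptions; the paper's dual-stream MIL architecture is not reproduced.

```python
import math

def entropy(attn, eps=1e-12):
    """Shannon entropy of an attention distribution (higher = more spread out)."""
    return -sum(a * math.log(a + eps) for a in attn)

def regularized_loss(task_loss, attn, coeff=0.1):
    """Add an entropy penalty so training favors peaked attention maps."""
    return task_loss + coeff * entropy(attn)
```

Uniform attention over four pixels has entropy ln 4, while attention fully concentrated on one pixel has entropy near zero, so the penalty rewards picking out characteristic class pixels.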
Affiliation(s)
- Wil Gardner
- Centre for Materials and Surface Science and Department of Mathematical and Physical Sciences, La Trobe University, Bundoora, Victoria, 3086, Australia
- David A Winkler
- Department of Biochemistry and Chemistry, La Trobe Institute for Molecular Sciences, La Trobe University, Melbourne, Victoria, 3086, Australia
- Monash Institute of Pharmaceutical Sciences, Monash University, Parkville, Victoria, 3052, Australia
- Advanced Materials and Healthcare Technologies, School of Pharmacy, University of Nottingham, Nottingham, NG7 2RD, UK
- Sarah E Bamford
- Centre for Materials and Surface Science and Department of Mathematical and Physical Sciences, La Trobe University, Bundoora, Victoria, 3086, Australia
- Paul J Pigram
- Centre for Materials and Surface Science and Department of Mathematical and Physical Sciences, La Trobe University, Bundoora, Victoria, 3086, Australia
8
Mukashyaka P, Sheridan TB, Foroughi Pour A, Chuang JH. SAMPLER: unsupervised representations for rapid analysis of whole slide tissue images. EBioMedicine 2024; 99:104908. [PMID: 38101298] [PMCID: PMC10733087] [DOI: 10.1016/j.ebiom.2023.104908]
Abstract
BACKGROUND Deep learning has revolutionized digital pathology, allowing automatic analysis of hematoxylin and eosin (H&E)-stained whole slide images (WSIs) for diverse tasks. WSIs are broken into smaller images called tiles, and a neural network encodes each tile. Many recent works use supervised attention-based models to aggregate tile-level features into a slide-level representation, which is then used for downstream analysis. Training supervised attention-based models is computationally intensive, architecture optimization of the attention module is non-trivial, and labeled data are not always available. Therefore, we developed an unsupervised and fast approach called SAMPLER to generate slide-level representations. METHODS Slide-level representations in SAMPLER are generated by encoding the cumulative distribution functions of multiscale tile-level features. To assess the effectiveness of SAMPLER, slide-level representations of breast carcinoma (BRCA), non-small cell lung carcinoma (NSCLC), and renal cell carcinoma (RCC) WSIs from The Cancer Genome Atlas (TCGA) were used to train separate classifiers distinguishing tumor subtypes in FFPE and frozen WSIs. In addition, the BRCA and NSCLC classifiers were externally validated on frozen WSIs. Moreover, SAMPLER's attention maps identify regions of interest, which were evaluated by a pathologist. To determine the time efficiency of SAMPLER, we compared its runtime with that of two attention-based models. SAMPLER concepts were also used to improve the design of a context-aware multi-head attention model (context-MHA). FINDINGS SAMPLER-based classifiers were comparable to state-of-the-art attention-based deep learning models in distinguishing subtypes of BRCA (AUC = 0.911 ± 0.029), NSCLC (AUC = 0.940 ± 0.018), and RCC (AUC = 0.987 ± 0.006) on FFPE WSIs (internal test sets). However, training SAMPLER-based classifiers was >100 times faster. SAMPLER models successfully distinguished tumor subtypes on both internal and external test sets of frozen WSIs. Histopathological review confirmed that SAMPLER-identified high-attention tiles contained subtype-specific morphological features. The improved context-MHA distinguished subtypes of BRCA and RCC (BRCA-AUC = 0.921 ± 0.027, RCC-AUC = 0.988 ± 0.010) with increased accuracy on internal test FFPE WSIs. INTERPRETATION Our unsupervised statistical approach is fast and effective for analyzing WSIs, with greatly improved scalability over attention-based deep learning methods. The high accuracy of SAMPLER-based classifiers and the interpretable attention maps suggest that SAMPLER successfully encodes the distinct morphologies within WSIs and will be applicable to general histology image analysis problems. FUNDING This study was supported by the National Cancer Institute (Grants No. R01CA230031 and P30CA034196).
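The core idea, as described in the abstract, is a training-free slide representation: for each tile-level feature dimension, encode the empirical cumulative distribution function at a few fixed quantiles, giving a fixed-length vector regardless of tile count. This is a rough sketch of that idea; the quantile grid, interpolation scheme, and multiscale handling are assumptions, not SAMPLER's published recipe.

```python
def quantile(sorted_vals, q):
    """Linearly interpolated empirical quantile of pre-sorted values."""
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = pos - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def sampler_representation(tile_features, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """tile_features: list of equal-length feature vectors, one per tile.
    Returns a slide-level vector of length dim * len(quantiles)."""
    dim = len(tile_features[0])
    rep = []
    for d in range(dim):
        vals = sorted(f[d] for f in tile_features)
        rep.extend(quantile(vals, q) for q in quantiles)
    return rep
```

Because the representation is a fixed set of quantiles per dimension, no attention module has to be trained, which is consistent with the >100x speedup the abstract reports.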
Affiliation(s)
- Patience Mukashyaka
- The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, University of Connecticut Health Center, Farmington, CT, USA
- Todd B Sheridan
- The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Pathology, Hartford Hospital, Hartford, CT, USA
- Jeffrey H Chuang
- The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, University of Connecticut Health Center, Farmington, CT, USA
9
Atabansi CC, Nie J, Liu H, Song Q, Yan L, Zhou X. A survey of Transformer applications for histopathological image analysis: New developments and future directions. Biomed Eng Online 2023; 22:96. [PMID: 37749595] [PMCID: PMC10518923] [DOI: 10.1186/s12938-023-01157-0]
Abstract
Transformers have been widely used in many computer vision challenges and have shown the capability of producing better results than convolutional neural networks (CNNs). Taking advantage of their ability to capture long-range contextual information and learn more complex relations in image data, Transformers have been applied to histopathological image processing tasks. In this survey, we present a thorough analysis of the uses of Transformers in histopathological image analysis, covering topics from newly built Transformer models to unresolved challenges. We first outline the fundamental principles of the attention mechanism included in Transformer models and other key frameworks. Second, we analyze Transformer-based applications in the histopathological imaging domain, providing a thorough evaluation of more than 100 research publications across downstream tasks including survival analysis and prediction, segmentation, classification, detection, and representation. We also compare the performance of CNN-based techniques with Transformers based on recently published papers, highlight major challenges, and suggest promising future research directions. Despite the outstanding performance of Transformer-based architectures in many of the papers reviewed in this survey, we anticipate that further improvement and exploration of Transformers in the histopathological imaging domain are still required. We hope that this survey gives readers in this field a thorough understanding of Transformer-based techniques in histopathological image analysis; an up-to-date paper list is maintained at https://github.com/S-domain/Survey-Paper.
Affiliation(s)
- Jing Nie
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Haijun Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Qianqian Song
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Lingfeng Yan
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Xichuan Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
10
Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Kadry S. A Novel Heteromorphous Convolutional Neural Network for Automated Assessment of Tumors in Colon and Lung Histopathology Images. Biomimetics (Basel) 2023; 8:370. [PMID: 37622975] [PMCID: PMC10452605] [DOI: 10.3390/biomimetics8040370]
Abstract
The automated assessment of tumors in medical image analysis encounters challenges due to the resemblance of colon and lung tumors to non-mitotic nuclei and their heteromorphic characteristics. An accurate assessment of tumor nuclei presence is crucial for determining tumor aggressiveness and grading. This paper proposes a new method called ColonNet, a heteromorphous convolutional neural network (CNN) with a feature grafting methodology categorically configured for analyzing mitotic nuclei in colon and lung histopathology images. The ColonNet model consists of two stages: first, identifying potential mitotic patches within the histopathological imaging areas, and second, categorizing these patches into squamous cell carcinomas, adenocarcinomas (lung), benign (lung), benign (colon), and adenocarcinomas (colon) based on the model's guidelines. We develop and employ our deep CNNs, each capturing distinct structural, textural, and morphological properties of tumor nuclei, to construct the heteromorphous deep CNN. The execution of the proposed ColonNet model is analyzed by its comparison with state-of-the-art CNNs. The results demonstrate that our model surpasses others on the test set, achieving an impressive F1 score of 0.96, sensitivity and specificity of 0.95, and an area under the accuracy curve of 0.95. These outcomes underscore our hybrid model's superior performance, excellent generalization, and accuracy, highlighting its potential as a valuable tool to support pathologists in diagnostic activities.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway