1
Zhang H, Feng Y, Zhang J, Li G, Wu J, Ji D. TDT-MIL: a framework with a dual-channel spatial positional encoder for weakly-supervised whole slide image classification. Biomed Opt Express 2024; 15:5831-5855. [PMID: 39421761] [PMCID: PMC11482175] [DOI: 10.1364/boe.530534]
Abstract
The classic multiple instance learning (MIL) paradigm is widely harnessed for weakly-supervised whole slide image (WSI) classification. Because positive tissue accounts for only a small fraction of a slide's billions of pixels, the spatial positional relationship between positive regions is crucial for this task, yet most studies have overlooked it. We therefore propose a framework called TDT-MIL. We first connect a convolutional neural network and a transformer in series for basic feature extraction. A novel dual-channel spatial positional encoder (DCSPE) module is then designed to simultaneously capture the complementary local and global positional information between instances. To further enrich the spatial positional relationship, we construct a convolutional triple-attention (CTA) module that attends to inter-channel information. Our model thus fully mines spatial positional and inter-channel information to characterize the key pathological semantics in a WSI. We evaluated TDT-MIL on two publicly available datasets, CAMELYON16 and TCGA-NSCLC, achieving classification accuracy and AUC of 91.54% and 94.96% on the former and 90.21% and 94.36% on the latter, outperforming state-of-the-art baselines. More importantly, our model handles the imbalanced WSI classification task well using a simple yet interpretable structure.
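The attention-based pooling that MIL frameworks of this kind build on can be sketched as follows. This is a minimal illustration of gated-attention MIL aggregation, not the TDT-MIL architecture itself; `V` and `w` stand in for hypothetical learned parameters.

```python
import math

def attention_mil_pool(instances, V, w):
    """Aggregate per-patch instance embeddings into one bag embedding
    using attention weights a_i = softmax(w . tanh(V h_i))."""
    scores = []
    for h in instances:
        hidden = [math.tanh(sum(V[j][k] * h[k] for k in range(len(h))))
                  for j in range(len(V))]
        scores.append(sum(w[j] * hidden[j] for j in range(len(w))))
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exp)
    attn = [e / total for e in exp]
    dim = len(instances[0])
    bag = [sum(a * h[k] for a, h in zip(attn, instances)) for k in range(dim)]
    return bag, attn

# Three 2-D instance embeddings; attention highlights the distinctive one.
instances = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]]
V = [[1.0, 1.0]]  # hypothetical 1x2 projection
w = [1.0]
bag, attn = attention_mil_pool(instances, V, w)
assert max(attn) == attn[2]          # the distinctive instance dominates
assert abs(sum(attn) - 1.0) < 1e-9
```

The bag embedding, rather than any single patch, then feeds the slide-level classifier, which is what lets training proceed with only slide-level labels.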
Affiliation(s)
- Hongbin Zhang, Ya Feng, Jin Zhang, Guangli Li: School of Information and Software Engineering, East China Jiaotong University, China
- Jianguo Wu: The Second Affiliated Hospital of Nanchang University, China
- Donghong Ji: Cyber Science and Engineering School, Wuhan University, China
2
Liu W, Zhang B, Liu T, Jiang J, Liu Y. Artificial Intelligence in Pancreatic Image Analysis: A Review. Sensors (Basel) 2024; 24:4749. [PMID: 39066145] [PMCID: PMC11280964] [DOI: 10.3390/s24144749]
Abstract
Pancreatic cancer is a highly lethal disease with a poor prognosis. Early diagnosis and accurate treatment rely mainly on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, such analysis faces challenges from ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This review covers AI applications such as segmentation, classification, object detection, and prognosis prediction across five imaging modalities, CT, MRI, EUS, PET, and pathological images, as well as the integration of these modalities to boost diagnostic accuracy and treatment efficiency. It also discusses current hot topics and future directions for overcoming the challenges facing AI-enabled automated pancreatic cancer diagnosis.
Affiliation(s)
- Weixuan Liu, Bairui Zhang: Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Tao Liu: School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Juntao Jiang, Yong Liu: College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
3
Zhang D, Duan C, Anazodo U, Wang ZJ, Lou X. Self-supervised anatomical continuity enhancement network for 7T SWI synthesis from 3T SWI. Med Image Anal 2024; 95:103184. [PMID: 38723320] [DOI: 10.1016/j.media.2024.103184]
Abstract
Synthesizing 7T Susceptibility Weighted Imaging (SWI) from 3T SWI could offer significant clinical benefits by combining the high sensitivity of 7T SWI for neurological disorders with the widespread availability of 3T SWI in diagnostic routines. Although methods exist for synthesizing 7T Magnetic Resonance Imaging (MRI), they primarily target traditional MRI modalities such as T1-weighted imaging rather than SWI. SWI poses unique challenges, including limited data availability and the invisibility of certain tissues in individual 3T SWI slices. To address these challenges, we propose a Self-supervised Anatomical Continuity Enhancement (SACE) network that synthesizes 7T SWI from 3T SWI using plentiful 3T SWI data and limited 3T-7T paired data. SACE employs two specifically designed pretext tasks that exploit low-level representations from abundant 3T SWI data to assist 7T SWI synthesis in a downstream task with limited paired data. One pretext task emphasizes input-specific morphology by balancing the elimination of redundant patterns against the preservation of essential morphology, preventing blurred synthetic 7T SWI images. The other improves the synthesis of tissues invisible in a single 3T SWI slice by aligning adjacent slices with the current slice and predicting their difference fields. The downstream task innovatively combines clinical knowledge with brain substructure diagrams to selectively enhance clinically relevant features. Evaluated on a dataset of 97 cases (5495 slices), the proposed method achieved a Peak Signal-to-Noise Ratio (PSNR) of 23.05 dB and a Structural Similarity Index (SSIM) of 0.688. Because no methods target 7T SWI specifically, we compared our method with existing general 7T MRI synthesis techniques, which it outperformed in the context of 7T SWI synthesis. Clinical evaluations show that our synthetic 7T SWI is clinically effective, demonstrating its potential as a clinical tool.
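For reference, the PSNR figure reported above is a standard image-fidelity metric; a generic sketch of its computation (not the authors' evaluation code) is:

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    given as flat lists of intensities in [0, max_val]."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref  = [0.0, 0.5, 1.0, 0.25]
pred = [0.1, 0.5, 0.9, 0.25]
# mse = (0.01 + 0 + 0.01 + 0) / 4 = 0.005  ->  10 * log10(1 / 0.005) ~ 23.01 dB
assert abs(psnr(ref, pred) - 23.0103) < 1e-3
```

Higher PSNR means the synthetic image deviates less, in mean-squared error terms, from the reference acquisition.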
Affiliation(s)
- Dong Zhang, Z Jane Wang: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Caohui Duan, Xin Lou: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Udunna Anazodo: Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
4
Lin H, Zou J, Wang K, Feng Y, Xu C, Lyu J, Qin J. Dual-space high-frequency learning for transformer-based MRI super-resolution. Comput Methods Programs Biomed 2024; 250:108165. [PMID: 38631131] [DOI: 10.1016/j.cmpb.2024.108165]
Abstract
BACKGROUND AND OBJECTIVE: Magnetic resonance imaging (MRI) provides rich, detailed, high-contrast information about soft tissues, but MRI scanning is time-consuming. To accelerate MR imaging, a variety of Transformer-based single-image super-resolution methods have been proposed in recent years, achieving promising results thanks to their superior ability to capture long-range dependencies. Nevertheless, most existing works prioritize the design of transformer attention blocks to capture global information, while the local high-frequency details that are pivotal to faithful MRI restoration are neglected. METHODS: We propose a high-frequency enhanced learning scheme that improves the awareness of high-frequency information in current Transformer-based MRI single-image super-resolution methods. Specifically, we present two fully plug-and-play modules that equip Transformer-based networks to recover high-frequency details from two spaces: 1) in the feature space, a high-frequency block (Hi-Fe block), placed in parallel with Transformer-based attention layers, extracts rich high-frequency features; and 2) in the image intensity space, a high-frequency amplification module (HFA) further refines the high-frequency details. By fully exploiting the merits of the two modules, our framework recovers abundant and diverse high-frequency information, yielding faithful MRI super-resolution results with fine details. RESULTS: We integrated our modules with six Transformer-based models and conducted experiments across three datasets. The results indicate that our plug-and-play modules enhance the super-resolution performance of all base models to varying degrees, surpassing existing state-of-the-art single-image super-resolution networks. CONCLUSION: A comprehensive comparison of super-resolution images and high-frequency maps from various methods clearly demonstrates that our modules restore high-frequency information, showing great potential for accelerated MRI reconstruction in clinical practice.
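The image-space idea of isolating high-frequency detail can be illustrated with a simple high-pass decomposition. This is an unsharp-mask-style sketch of splitting an image into a smooth base and a detail residual, not the paper's HFA module.

```python
def high_frequency_residual(img, radius=1):
    """Split a 2-D image (list of rows) into low- and high-frequency parts:
    low = local box-blur mean, high = img - low (the detail layer)."""
    h, w = len(img), len(img[0])
    low = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            low[y][x] = sum(vals) / len(vals)
    high = [[img[y][x] - low[y][x] for x in range(w)] for y in range(h)]
    return low, high

flat = [[5.0] * 4 for _ in range(4)]             # constant image: no detail
_, high_flat = high_frequency_residual(flat)
assert all(abs(v) < 1e-12 for row in high_flat for v in row)

edge = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]  # vertical edge
_, high = high_frequency_residual(edge)
assert any(abs(v) > 0.1 for row in high for v in row)  # detail at the edge
```

A super-resolution network that amplifies the `high` layer before recombining it emphasizes exactly the edges and textures that blur first when resolution is lost.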
Affiliation(s)
- Haoneng Lin, Jing Zou, Kang Wang, Yidan Feng, Cheng Xu, Jing Qin: School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jun Lyu: Brigham and Women's Hospital, Harvard Medical School, Boston, United States
5
Liu Z, Han J, Liu J, Li ZC, Zhai G. Neighborhood evaluator for efficient super-resolution reconstruction of 2D medical images. Comput Biol Med 2024; 171:108212. [PMID: 38422967] [DOI: 10.1016/j.compbiomed.2024.108212]
Abstract
BACKGROUND: Deep learning-based super-resolution (SR) algorithms reconstruct low-resolution (LR) images into high-fidelity high-resolution (HR) images by learning low- and high-frequency information. In medical application scenarios, high-quality reconstruction of LR digital medical images fulfills experts' diagnostic requirements. PURPOSE: Medical image SR algorithms should support arbitrary output resolutions and run efficiently in applications, yet no prior study addresses both. Several SR studies on natural images achieve reconstruction at unrestricted resolutions, but their large model sizes severely limit efficiency and make them ill-suited to medical applications. We therefore propose a highly efficient method for reconstructing medical images at any desired resolution. METHODS: Statistical features of medical images exhibit greater continuity between neighboring pixels than natural images, which makes medical images comparatively easier to reconstruct. Exploiting this property, we develop a neighborhood evaluator that represents neighborhood continuity while controlling the network's depth. RESULTS: The proposed method performs best across seven reconstruction scales in experiments on panoramic radiographs and two external public datasets. Furthermore, the proposed network reduces the parameter count by over 20× and the computational workload by over 10× compared with prior work; on large-scale reconstruction, inference speed improves by over 5×. CONCLUSION: The proposed SR strategy performs efficient medical image reconstruction at arbitrary resolutions, a significant advance for the field, and facilitates the deployment of SR on mobile medical platforms.
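The premise that neighboring pixels in medical images vary more smoothly than in natural images can be quantified with a simple statistic. The function below is a hypothetical proxy for that continuity, not the paper's neighborhood evaluator.

```python
def neighborhood_continuity(img):
    """Mean absolute difference between horizontally and vertically
    adjacent pixels; lower values mean a smoother, more continuous image."""
    h, w = len(img), len(img[0])
    diffs = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                diffs.append(abs(img[y][x] - img[y][x + 1]))
            if y + 1 < h:
                diffs.append(abs(img[y][x] - img[y + 1][x]))
    return sum(diffs) / len(diffs)

smooth = [[0.0, 0.1], [0.1, 0.2]]   # gently varying, medical-like patch
noisy  = [[0.0, 1.0], [1.0, 0.0]]   # high-variation, natural-like patch
assert neighborhood_continuity(smooth) < neighborhood_continuity(noisy)
```

A statistic like this could, in principle, gate how much network capacity a reconstruction needs: smoother regions tolerate a shallower model, which is the efficiency lever the abstract describes.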
Affiliation(s)
- Zijia Liu, Guangtao Zhai: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
- Jing Han, Jiannan Liu: Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, 639 Zhizaoju RD, Shanghai, 200011, China
- Zhi-Cheng Li: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan RD, Shenzhen, 518055, China
6
Ye J, Kalra S, Miri MS. Cluster-based histopathology phenotype representation learning by self-supervised multi-class-token hierarchical ViT. Sci Rep 2024; 14:3202. [PMID: 38331955] [PMCID: PMC10853503] [DOI: 10.1038/s41598-024-53361-0]
Abstract
Developing a clinical AI model requires a large, highly curated dataset carefully annotated by multiple medical experts, which increases development time and cost. Self-supervised learning (SSL) enables AI models to leverage unlabelled data to acquire domain-specific background knowledge that can enhance their performance on various downstream tasks. In this work, we introduce CypherViT, a cluster-based histopathology phenotype representation learning approach built on a self-supervised multi-class-token hierarchical Vision Transformer (ViT). CypherViT is a novel backbone that can be integrated into an SSL pipeline, accommodating both coarse- and fine-grained feature learning for histopathological images via a hierarchical feature-agglomerative attention module with multiple classification (cls) tokens in the ViT. Our qualitative analysis shows that the approach learns semantically meaningful regions of interest that align with morphological phenotypes. To validate the model, we train CypherViT with the DINO SSL framework on a substantial dataset of unlabeled breast cancer histopathology images; the trained model proves to be a generalizable and robust feature extractor for colorectal cancer images. Notably, it performs well on patch-level tissue phenotyping tasks across four public datasets, and our quantitative experiments show significant advantages over existing state-of-the-art SSL models and traditional transfer learning baselines such as ImageNet pre-training.
Affiliation(s)
- Jiarong Ye, Shivam Kalra: Roche Diagnostics Solutions, Santa Clara, CA, USA
7
Jin L, Sun T, Liu X, Cao Z, Liu Y, Chen H, Ma Y, Zhang J, Zou Y, Liu Y, Shi F, Shen D, Wu J. A multi-center performance assessment for automated histopathological classification and grading of glioma using whole slide images. iScience 2023; 26:108041. [PMID: 37876818] [PMCID: PMC10590813] [DOI: 10.1016/j.isci.2023.108041]
Abstract
Accurate pathological classification and grading of gliomas is crucial for clinical diagnosis and treatment, and deep learning holds promise for automated histopathological diagnosis. In this study, we collected 733 whole slide images from four medical centers: 456 for model training, 150 for internal validation, and 127 for multi-center testing. The study covers five common glioma types. We employed a subtask-guided multi-instance learning image-to-label training pipeline that leverages "patch prompting" so the model converges at reasonable computational cost. Experiments showed an overall accuracy of 0.79 on the internal validation dataset and 0.73 on the multi-center testing dataset. These findings indicate a minor yet acceptable performance decrease on multi-center data, demonstrating the model's strong generalizability and establishing a robust foundation for future clinical applications.
Affiliation(s)
- Lei Jin, Xi Liu, Yan Liu, Yixin Ma: Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Tianyang Sun, Zehong Cao, Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China
- Hong Chen: National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China; Department of Pathology, Huashan Hospital Fudan University, Shanghai 200040, China
- Jun Zhang, Yaping Zou: Wuhan Zhongji Biotechnology Co., Ltd, Wuhan 430206, China
- Yingchao Liu: Department of Neurosurgery, The Provincial Hospital Affiliated to Shandong First Medical University, Shandong 250021, China
- Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China
- Jinsong Wu: Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
8
Xiang H, Shen J, Yan Q, Xu M, Shi X, Zhu X. Multi-scale representation attention based deep multiple instance learning for gigapixel whole slide image analysis. Med Image Anal 2023; 89:102890. [PMID: 37467642] [DOI: 10.1016/j.media.2023.102890]
Abstract
Recently, convolutional neural networks (CNNs) that work directly on whole slide images (WSIs) for tumor diagnosis and analysis have attracted considerable attention, because they require only the slide-level label for model training, without any additional annotations. However, directly handling gigapixel WSIs remains challenging because of the billions of pixels and intra-slide variation in each WSI. To overcome this problem, we propose a novel end-to-end interpretable deep MIL framework for WSI analysis that uses a two-branch deep neural network and a multi-scale representation attention mechanism to extract features directly from all patches of each WSI. Specifically, we first divide each WSI into bag-, patch- and cell-level images, and then assign the slide-level label to its corresponding bag-level images, so that WSI classification becomes a MIL problem. Additionally, we design a novel multi-scale representation attention mechanism and embed it into a two-branch deep network to simultaneously mine the bag with a correct label, the significant patches, and their cell-level information. Extensive experiments demonstrate the superior performance of the proposed framework over recent state-of-the-art methods in terms of classification accuracy and model interpretability. All source code is released at: https://github.com/xhangchen/MRAN/.
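The bag-construction step described above, tiling a slide and attaching the slide-level label to the whole patch collection, can be sketched as follows. This is a schematic that ignores tissue masking and the paper's multi-scale patch/cell split.

```python
def wsi_to_bag(wsi_size, patch_size, slide_label):
    """Tile a WSI into non-overlapping patch coordinates and assign the
    slide-level label to the whole bag, turning classification into MIL."""
    width, height = wsi_size
    patches = [(x, y)
               for y in range(0, height - patch_size + 1, patch_size)
               for x in range(0, width - patch_size + 1, patch_size)]
    return {"label": slide_label, "patches": patches}

bag = wsi_to_bag((1024, 512), 256, slide_label=1)
assert bag["label"] == 1
assert len(bag["patches"]) == 8      # 4 x 2 grid of 256-px tiles
assert bag["patches"][0] == (0, 0)
```

Every patch inherits the bag's weak label; the attention mechanism is then responsible for discovering which patches actually carry the tumor evidence.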
Affiliation(s)
- Hangchen Xiang, Xiaoshuang Shi, Xiaofeng Zhu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Junyi Shen: Division of Liver Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610044, China
- Qingguo Yan: Department of Pathology, Key Laboratory of Resource Biology and Biotechnology in Western China, Ministry of Education, School of Medicine, Northwest University, 229 Taibai North Road, Xi'an 710069, China
- Meilian Xu: School of Electronic Information and Artificial Intelligence, Leshan Normal University, Leshan 614000, China
9
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897] [PMCID: PMC10622844] [DOI: 10.1016/j.jpi.2023.100335]
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practice by facilitating the storing, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSI and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support a range of digital pathology tasks, such as cancer prognosis and diagnosis. Deep learning-based feature extraction has emerged as a promising approach to accurately representing WSI content and has demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, both hand-engineered and deep learning-based, for the analysis of WSIs. We review the relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to supporting the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. The survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya, Nauman Ullah Gilal, Mahmood Alzubaidi, Fahad Majeed, Marco Agus, Jens Schneider, Mowafa Househ: Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
10
Pathological image super-resolution using mix-attention generative adversarial network. Int J Mach Learn Cybern 2023. [DOI: 10.1007/s13042-023-01806-9]
11
MMSRNet: Pathological image super-resolution by multi-task and multi-scale learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104428]
12
Li B, Nelson MS, Chacko JV, Cudworth N, Eliceiri KW. Hardware-software co-design of an open-source automatic multimodal whole slide histopathology imaging system. J Biomed Opt 2023; 28:026501. [PMID: 36761254] [PMCID: PMC9905038] [DOI: 10.1117/1.jbo.28.2.026501]
Abstract
Significance: Advanced digital control of microscopes and programmable data-acquisition workflows have become increasingly important for improving the throughput and reproducibility of optical imaging experiments. Combining imaging modalities has enabled a more comprehensive understanding of tissue biology and tumor microenvironments in histopathological studies, but insufficient imaging throughput and complicated workflows still limit the scalability of multimodal histopathology imaging. Aim: We present a hardware-software co-design of a whole slide scanning system for high-throughput multimodal tissue imaging, including brightfield (BF) and laser scanning microscopy. Approach: The system automatically detects regions of interest using deep neural networks in a low-magnification rapid BF scan of the tissue slide, then conducts high-resolution BF and laser scanning imaging of the targeted regions with deep learning-based run-time denoising and resolution enhancement. The acquisition workflow is built using Pycro-Manager, a Python package that bridges the hardware control libraries of the Java-based open-source microscopy software Micro-Manager into a Python environment. Results: The system achieves optimized imaging settings for both modalities with minimal human intervention and speeds up laser scanning by an order of magnitude through run-time image processing. Conclusions: The system integrates the acquisition and data analysis pipelines into a single workflow that improves the throughput and reproducibility of multimodal histopathological imaging.
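A whole-slide acquisition workflow of this kind ultimately drives the stage through a grid of tile positions. Below is a minimal, purely illustrative sketch of generating a serpentine (snake-order) tiling plan, which reduces stage travel between rows; the actual system commands hardware through Pycro-Manager rather than returning coordinates.

```python
def serpentine_scan_positions(cols, rows, fov_um):
    """Stage positions (in micrometers) for tiling a slide region,
    visiting rows in a snake pattern to minimize stage travel."""
    positions = []
    for r in range(rows):
        xs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in xs:
            positions.append((c * fov_um, r * fov_um))
    return positions

pos = serpentine_scan_positions(cols=3, rows=2, fov_um=500.0)
assert pos[:3] == [(0.0, 0.0), (500.0, 0.0), (1000.0, 0.0)]
assert pos[3] == (1000.0, 500.0)  # second row starts where the first ended
assert len(pos) == 6
```

In the co-designed system described above, a plan like this would be consumed by the acquisition engine, with the ROI detector pruning tiles that contain no tissue.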
Affiliation(s)
- Bin Li: University of Wisconsin–Madison, Center for Quantitative Cell Imaging, Madison, Wisconsin, United States; University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States; Morgridge Institute for Research, Madison, Wisconsin, United States
- Michael S. Nelson: University of Wisconsin–Madison, Center for Quantitative Cell Imaging, Madison, Wisconsin, United States; University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States
- Jenu V. Chacko: University of Wisconsin–Madison, Center for Quantitative Cell Imaging, Madison, Wisconsin, United States
- Nathan Cudworth: University of Wisconsin–Madison, Center for Quantitative Cell Imaging, Madison, Wisconsin, United States; University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- Kevin W. Eliceiri: University of Wisconsin–Madison, Center for Quantitative Cell Imaging, Madison, Wisconsin, United States; University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States; Morgridge Institute for Research, Madison, Wisconsin, United States; University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
13
Manuel C, Zehnder P, Kaya S, Sullivan R, Hu F. Impact of color augmentation and tissue type in deep learning for hematoxylin and eosin image super resolution. J Pathol Inform 2022; 13:100148. [PMID: 36268062] [PMCID: PMC9577134] [DOI: 10.1016/j.jpi.2022.100148]
Affiliation(s)
- Fangyao Hu: Genentech, 1 DNA Way, South San Francisco, CA 94080, USA (corresponding author)
14
Michielli N, Caputo A, Scotto M, Mogetta A, Pennisi OAM, Molinari F, Balmativola D, Bosco M, Gambella A, Metovic J, Tota D, Carpenito L, Gasparri P, Salvi M. Stain normalization in digital pathology: Clinical multi-center evaluation of image quality. J Pathol Inform 2022; 13:100145. [PMID: 36268060] [PMCID: PMC9577129] [DOI: 10.1016/j.jpi.2022.100145]
Abstract
In digital pathology, the final appearance of digitized images is affected by several factors, resulting in variation of stain color and intensity. Stain normalization is an innovative solution to overcome this stain variability. However, color normalization tools have so far been validated only from a quantitative perspective, through similarity metrics computed between the original and normalized images. To the best of our knowledge, no prior work has investigated the impact of normalization on the pathologist's evaluation. The objective of this paper is to propose a multi-tissue (i.e., breast, colon, liver, lung, and prostate) and multi-center qualitative analysis of a stain normalization tool involving pathologists with different years of experience. Two qualitative studies were carried out for this purpose: (i) a first study focused on the perceived image quality and the absence of significant image artifacts after the normalization process; (ii) a second study focused on the clinical score of the normalized image with respect to the original one. The results of the first study prove the high quality of the normalized images with minimal artifact generation, while the second study demonstrates the superiority of the normalized image over the original one in clinical practice. The normalization process can both reduce variability due to tissue staining procedures and facilitate the pathologist's histological examination. The experimental results obtained in this work are encouraging and can justify the use of a stain normalization tool in clinical routine.
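As background for what such a tool does, here is a minimal sketch of Reinhard-style statistics matching, a common baseline for stain normalization. It is not the tool evaluated in this study, and working directly in RGB is a simplification of this sketch (practical implementations typically match statistics in a perceptual color space such as LAB).

```python
import numpy as np

def reinhard_normalize(source, target):
    """Match the per-channel mean and standard deviation of `source`
    to those of `target` (Reinhard-style color transfer, shown here
    directly in RGB for simplicity).

    source, target: float arrays of shape (H, W, 3) with values in [0, 1].
    """
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + 1e-8   # avoid division by zero
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1))

    # Whiten the source channels, then re-color them with target statistics.
    normalized = (source - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(normalized, 0.0, 1.0)
```

After normalization, every image in a cohort shares the reference image's color statistics, which is exactly the kind of variability reduction the studies above evaluate qualitatively.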
Affiliation(s)
- Nicola Michielli
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Alessandro Caputo
- Department of Medicine and Surgery, University Hospital of Salerno, Salerno, Italy
- Manuela Scotto
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Alessandro Mogetta
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Orazio Antonino Maria Pennisi
- Technology Transfer and Industrial Liaison Department, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Filippo Molinari
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Davide Balmativola
- Pathology Unit, Humanitas Gradenigo Hospital, Corso Regina Margherita 8, 10153 Turin, Italy
- Martino Bosco
- Department of Pathology, Michele and Pietro Ferrero Hospital, 12060 Verduno, Italy
- Alessandro Gambella
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
- Jasna Metovic
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
- Daniele Tota
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
- Laura Carpenito
- Department of Pathology, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- University of Milan, Milan, Italy
- Paolo Gasparri
- UOC di Anatomia Patologica, ASP Catania P.O. “Gravina”, Caltagirone, Italy
- Massimo Salvi
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
15
Mehbodniya A, Rao MV, David LG, Joe Nigel KG, Vennam P. Online product sentiment analysis using random evolutionary whale optimization algorithm and deep belief network. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.04.024]
16
Wang X, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X. Transformer-based unsupervised contrastive learning for histopathological image classification. Med Image Anal 2022; 81:102559. [DOI: 10.1016/j.media.2022.102559]
17
Li B, Li Y, Eliceiri KW. Dual-stream Multiple Instance Learning Network for Whole Slide Image Classification with Self-supervised Contrastive Learning. Proc IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR) 2021:14318-14328. [PMID: 35047230 DOI: 10.1109/cvpr46437.2021.01409]
Abstract
We address the challenging problem of whole slide image (WSI) classification. WSIs have very high resolutions and usually lack localized annotations. WSI classification can be cast as a multiple instance learning (MIL) problem when only slide-level labels are available. We propose an MIL-based method for WSI classification and tumor detection that does not require localized annotations. Our method has three major components. First, we introduce a novel MIL aggregator that models the relations of the instances in a dual-stream architecture with trainable distance measurement. Second, since WSIs can produce large or unbalanced bags that hinder the training of MIL models, we propose to use self-supervised contrastive learning to extract good representations for MIL and alleviate the prohibitive memory cost of large bags. Third, we adopt a pyramidal fusion mechanism for multiscale WSI features, which further improves the accuracy of classification and localization. Our model is evaluated on two representative WSI datasets. Its classification accuracy compares favorably to fully-supervised methods, with less than a 2% accuracy gap across datasets. Our results also outperform all previous MIL-based methods. Additional benchmark results on standard MIL datasets further demonstrate the superior performance of our MIL aggregator on general MIL problems.
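The dual-stream aggregator idea (one stream picks a critical instance, the other attends to all instances by their similarity to it) can be sketched in plain NumPy. This toy version only illustrates the dataflow: `w_inst` and `w_score` are illustrative stand-ins for trained classifier weights, and a simple dot-product similarity replaces the paper's trainable distance measurement.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_stream_mil(instance_feats, w_inst, w_score):
    """Toy dual-stream MIL aggregator for one bag of patch embeddings.

    instance_feats: (N, D) embeddings of the N patches in one WSI bag.
    w_inst: (D,) instance-level scoring weights (stream 1).
    w_score: (D,) bag-level scoring weights (stream 2).
    Returns (max instance score, bag-level score).
    """
    # Stream 1: score every instance and pick the critical (highest-scoring) one.
    inst_scores = instance_feats @ w_inst
    critical = instance_feats[np.argmax(inst_scores)]

    # Stream 2: attention weights from each instance's similarity to the
    # critical instance (stand-in for a trainable distance measurement).
    attn = softmax(instance_feats @ critical)

    # Attention-weighted bag embedding, then a bag-level score.
    bag_embedding = attn @ instance_feats          # (D,)
    return float(inst_scores.max()), float(bag_embedding @ w_score)
```

In the paper both streams are trained end-to-end and their outputs are combined for the slide-level prediction; the point of the sketch is only how the critical instance anchors the attention over the rest of the bag.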
Affiliation(s)
- Bin Li
- Department of Biomedical Engineering, University of Wisconsin-Madison
- Morgridge Institute for Research, Madison, WI, USA
- Yin Li
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Department of Computer Sciences, University of Wisconsin-Madison
- Kevin W Eliceiri
- Department of Biomedical Engineering, University of Wisconsin-Madison
- Morgridge Institute for Research, Madison, WI, USA
- Department of Medical Physics, University of Wisconsin-Madison
18
Chen X, Yu J, Cheng S, Geng X, Liu S, Han W, Hu J, Chen L, Liu X, Zeng S. An unsupervised style normalization method for cytopathology images. Comput Struct Biotechnol J 2021; 19:3852-3863. [PMID: 34285783 PMCID: PMC8273362 DOI: 10.1016/j.csbj.2021.06.025]
Abstract
Diverse styles of cytopathology images have a negative effect on the generalization ability of automated image analysis algorithms. This article proposes an unsupervised method to normalize cytopathology image styles. We design a two-stage style normalization framework with a style removal module that converts the colorful cytopathology image into a gray-scale image with a color-encoding mask, and a domain adversarial style reconstruction module that maps them back to a colorful image in a user-selected style. Our method enforces both hue and structure consistency before and after normalization by using the color-encoding mask and per-pixel regression. Intra-domain and inter-domain adversarial learning are applied to keep the style of normalized images consistent with the user-selected style for input images from different domains. Our method outperforms current unsupervised color normalization methods on six cervical cell datasets from different hospitals and scanners. We further demonstrate that our normalization method greatly improves the recognition accuracy of lesion cells on unseen cytopathology images, which is meaningful for model generalization.
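The two-stage decomposition idea (a structure image plus a color-encoding mask that can later be mapped back to a colorful image) can be illustrated with a closed-form toy split. The paper's learned style-removal and adversarial reconstruction modules are replaced here by a trivial mean/residual decomposition, which is an assumption of this sketch.

```python
import numpy as np

def style_decompose(img):
    """Toy analogue of a style-removal step: split an RGB image into a
    gray-scale 'structure' image and a residual 'color-encoding' mask.
    img: float array of shape (H, W, 3).
    """
    gray = img.mean(axis=2, keepdims=True)   # (H, W, 1) structure
    mask = img - gray                        # (H, W, 3) color residual
    return gray, mask

def style_recompose(gray, mask):
    """Inverse of the split. A learned reconstruction module would instead
    map (gray, mask) to a user-selected target style rather than back to
    the original colors."""
    return gray + mask
```

Because the mask carries only the color residual, restyling amounts to keeping `gray` fixed while the reconstruction stage re-colors it, which is why hue and structure consistency can be enforced separately.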
Affiliation(s)
- Xihao Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jingya Yu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shenghua Cheng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiebo Geng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Sibo Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Wei Han
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Junbo Hu
- Women and Children Hospital of Hubei Province, Wuhan, Hubei, China
- Li Chen
- Department of Clinical Laboratory, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiuli Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, China
19
Cornish TC. Artificial intelligence for automating the measurement of histologic image biomarkers. J Clin Invest 2021; 131:147966. [PMID: 33855974 DOI: 10.1172/jci147966]
Abstract
Artificial intelligence has been applied to histopathology for decades, but the recent increase in interest is attributable to well-publicized successes in the application of deep-learning techniques, such as convolutional neural networks, for image analysis. Recently, generative adversarial networks (GANs) have provided a method for performing image-to-image translation tasks on histopathology images, including image segmentation. In this issue of the JCI, Koyuncu et al. applied GANs to whole-slide images of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) to automate the calculation of a multinucleation index (MuNI) for prognostication in p16-positive OPSCC. Multivariable analysis showed that the MuNI was prognostic for disease-free survival, overall survival, and metastasis-free survival. These results are promising, as they present a prognostic method for p16-positive OPSCC and highlight methods for using deep learning to measure image biomarkers from histopathologic samples in an inherently explainable manner.