1
Zhang G, He Z, Zhang Y, Li Z, Wu L. SC-Net: Symmetrical conical network for colorectal pathology image segmentation. Comput Methods Programs Biomed 2024; 248:108119. PMID: 38520785. DOI: 10.1016/j.cmpb.2024.108119. Received 12/04/2023; Revised 02/25/2024; Accepted 03/04/2024.
Abstract
BACKGROUND AND OBJECTIVE: Segmentation of colorectal cancer histopathology images is a core task in computer-aided medical image diagnosis systems. Existing convolutional neural networks generally extract multi-scale information in a linear flow by inserting multi-branch modules, which makes it difficult to extract heterogeneous semantic information at multiple levels and under different receptive fields, and to establish context dependencies among features from different receptive fields. METHODS: To address these issues, we propose a symmetric, spiral, progressive feature-fusion encoder-decoder network called the Symmetric Conical Network (SC-Net). First, we design a Multi-scale Feature Extraction Block (MFEB), matched to the symmetric conical structure, to obtain multi-branch heterogeneous semantic information under different receptive fields and thereby enrich the diversity of extracted features. The encoder is composed of MFEBs arranged in a spiral, multi-branch layout to strengthen context dependence between different information flows. Second, because causally stacking MFEBs loses contour, color, and other low-level information in high-level semantic features, a Feature Mapping Layer (FML) is designed to map low-level features to high-level semantic features along the down-sampling branch, addressing insufficient global feature extraction at deep levels. RESULTS: SC-Net was evaluated on our self-constructed colorectal cancer dataset, a publicly available breast cancer dataset, and a polyp dataset, achieving mDice scores of 0.8611, 0.7259, and 0.7144, respectively. Compared with state-of-the-art semantic segmentation networks such as UNet++, PSPNet, Attention U-Net, and R2U-Net, SC-Net achieves the best performance.
CONCLUSIONS: The results indicate that the proposed SC-Net excels at segmenting H&E-stained pathology images, effectively preserving morphological features and spatial information even under weak texture, poor contrast, and appearance variations.
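For context, the mDice figures reported in this entry refer to the mean Dice coefficient over a test set. A minimal sketch of how Dice and mDice are typically computed for binary masks (generic background, not the authors' code):

```python
def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks given as flat lists of 0/1."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

def mean_dice(preds, targets):
    """mDice: average Dice over a set of (prediction, ground-truth) mask pairs."""
    scores = [dice(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)
```

Perfect overlap yields 1.0 and disjoint masks yield approximately 0, so the scores above (0.86, 0.73, 0.71) sit on a 0-to-1 scale.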
Affiliation(s)
- Gang Zhang
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650500, China.
- Zifen He
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650500, China.
- Yinhui Zhang
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650500, China.
- Zhenhui Li
- Yunnan Cancer Hospital, Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Center, Kunming 650118, China.
- Lin Wu
- Yunnan Cancer Hospital, Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Center, Kunming 650118, China.
2
Li Z, Zhang N, Gong H, Qiu R, Zhang W. SG-MIAN: Self-guided multiple information aggregation network for image-level weakly supervised skin lesion segmentation. Comput Biol Med 2024; 170:107988. PMID: 38232452. DOI: 10.1016/j.compbiomed.2024.107988. Received 10/05/2023; Revised 12/11/2023; Accepted 01/13/2024.
Abstract
Skin disease is becoming one of the most serious threats to people's health. Computer-aided diagnosis based on deep learning is widely used to assist medical professionals, and segmentation of lesion areas is one of its most important steps. However, traditional medical image segmentation methods rely on large numbers of pixel-level labels for fully supervised training, and such labeling is time-consuming and requires professional expertise. To reduce the cost of pixel-level labeling, we propose a method that segments skin lesion areas using only image-level labels. Because image-level labels lack the lesion's spatial and intensity information, and skin lesions exhibit a wide range of irregular shapes and varied textures, the algorithm must pay close attention to automatic lesion localization and perception of lesion boundaries. In this paper, we propose a Self-Guided Multiple Information Aggregation Network (SG-MIAN). Our backbone network, MIAN, uses the Multiple Spatial Perceptron (MSP), guided solely by classification information, to discriminate the key classification features of lesion areas, thereby localizing and activating lesion areas more accurately. In addition to the MSP, we propose an Auxiliary Activation Structure (AAS) and two auxiliary loss functions for further self-guided boundary correction, achieving accurate boundary activation. To verify the effectiveness of the proposed method, we conducted extensive experiments on the HAM10000 and PH2 datasets, demonstrating superior performance compared to most existing weakly supervised segmentation methods.
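The abstract does not detail the MSP's internals; as a hedged illustration of the generic mechanism behind localization from image-level labels, a minimal class activation map (CAM) weights the final feature maps by one class's classifier weights. All names here are illustrative, not the SG-MIAN architecture:

```python
def class_activation_map(feature_maps, class_weights):
    """Minimal CAM sketch: weighted sum of C feature maps (each an HxW
    nested list) using the classifier weights of one class, followed by
    ReLU and normalization so the map can be thresholded into a mask.
    Generic weak-supervision background, not the SG-MIAN method."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    flat = [max(v, 0.0) for row in cam for v in row]
    peak = max(flat) or 1.0  # avoid division by zero on an all-zero map
    return [[max(v, 0.0) / peak for v in row] for row in cam]
```

High values in the normalized map mark regions that most supported the classification decision, which is the usual starting point for lesion localization without pixel labels.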
Affiliation(s)
- Zhixun Li
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.
- Nan Zhang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.
- Huiling Gong
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.
- Ruiyun Qiu
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.
- Wei Zhang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.
3
Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. PMID: 38308865. DOI: 10.1016/j.compbiomed.2024.108026. Received 09/19/2023; Revised 01/24/2024; Accepted 01/24/2024.
Abstract
Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which is expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. However, these methods have mainly focused on algorithmic advances rather than on practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER's interactive design leverages self- and semi-supervised learning to let the user participate in segmentation, producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model computes embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, labels are propagated either by our novel seeded iterative clustering method or by direct interaction with the embedding space via feature-space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer classification benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
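The three-step workflow described above can be illustrated with a deliberately simplified baseline: propagate the user's seed labels to every patch embedding by nearest-seed cosine similarity. This is a hedged stand-in for DEPICTER's seeded iterative clustering, whose actual algorithm is not given in the abstract:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def propagate_labels(embeddings, seeds):
    """Assign each patch embedding the label of its most similar seed.
    `seeds` is a list of (embedding, label) pairs picked by the user.
    Illustrative baseline only, not DEPICTER's seeded iterative clustering."""
    labels = []
    for emb in embeddings:
        best = max(seeds, key=lambda s: cosine(emb, s[0]))
        labels.append(best[1])
    return labels
```

With a handful of user-selected "benign" and "cancerous" seed patches, every remaining patch inherits the label of its nearest seed in embedding space; an iterative method refines such assignments rather than stopping at one pass.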
Affiliation(s)
- Eduard Chelebian
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden.
- Christophe Avenel
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden.
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands.
- Carolina Wählby
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden.
4
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. PMID: 38139718. PMCID: PMC10748263. DOI: 10.3390/s23249872. Received 10/01/2023; Revised 11/15/2023; Accepted 12/13/2023.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved breakthroughs in functionality. However, given the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital-reality technology, medical experts can improve their performance in IGS many times over. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field, up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope this survey will shed some light on the future of IGS in the face of challenges and opportunities for research in medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China.
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China.
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China.
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China.
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China.
5
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. PMID: 37835858. PMCID: PMC10572440. DOI: 10.3390/diagnostics13193115. Received 08/31/2023; Revised 09/27/2023; Accepted 09/28/2023.
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep-learning-based AIP in laboratory settings often proves difficult to replicate in clinical practice. Because data preparation is crucial for AIP, this paper reviews AIP-related studies in the PubMed database published from January 2017 to February 2022; 118 studies were included. We analyze data preparation methods in depth, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into why the high performance of AIP is hard to reproduce in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on randomized collection of representative disease slides, rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning based on whole-slide images (WSI) are effective ways to overcome obstacles to reproducing performance. The key to reproducibility lies in representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully enable clinical-grade AIP.
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China.
- Department of Biomedical Engineering, School of Medical, Tsinghua University, Beijing 100084, China.
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China.
- Furong Laboratory, Changsha 410013, China.
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China.
- Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China.
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China.
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China.
6
Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023; 89:102915. PMID: 37633177. DOI: 10.1016/j.media.2023.102915. Received 09/26/2022; Revised 05/17/2023; Accepted 07/25/2023.
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty of obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSI). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention. Research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of the WSI, where nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class predictions for an input WSI. We evaluate WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while yielding better or comparable classification with respect to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
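The step of turning per-node attribution scores into node pseudo-labels can be sketched in its simplest form: nodes whose attribution strongly supports the slide-level prediction inherit that label, the rest stay unlabeled. The threshold and function names are assumptions for illustration; the paper's exact attribution scheme is not given in the abstract:

```python
def node_pseudo_labels(attributions, slide_label, threshold=0.5):
    """Simplified reading of attribution-based pseudo-labeling: given
    per-node attribution scores in [0, 1] from a graph-level classifier
    and the slide's label, nodes at or above `threshold` inherit the
    slide label; others get None (excluded from node-level training).
    Sketch only, not WholeSIGHT's published procedure."""
    return [slide_label if a >= threshold else None for a in attributions]
```

These pseudo-labels would then supervise a node classification head, as the abstract describes.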
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA.
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland.
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland.
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden.
7
Wang Y, Lin H, Yao N, Chen X, Qiu B, Cui Y, Liu Y, Li B, Han C, Li Z, Zhao W, Wang Z, Pan X, Lu C, Liu J, Liu Z, Liu Z. Computerized tertiary lymphoid structures density on H&E-images is a prognostic biomarker in resectable lung adenocarcinoma. iScience 2023; 26:107635. PMID: 37664636. PMCID: PMC10474456. DOI: 10.1016/j.isci.2023.107635. Received 05/20/2023; Revised 07/17/2023; Accepted 08/11/2023.
Abstract
A greater abundance of tertiary lymphoid structures (TLSs) is associated with a favorable prognosis in patients with lung adenocarcinoma (LUAD). However, evaluating TLSs manually is an experience-dependent and time-consuming process, which limits its clinical application. In this multi-center study, we developed an automated computational workflow for quantifying TLS density in the tumor region of routine hematoxylin and eosin (H&E)-stained whole-slide images (WSIs). The association between computerized TLS density and disease-free survival (DFS) was explored in 802 patients with resectable LUAD across three cohorts. Additionally, a Cox proportional hazards regression model incorporating clinicopathological variables and TLS density was established to assess its prognostic ability. Computerized TLS density was an independent prognostic biomarker in patients with resectable LUAD, and integrating it with clinicopathological variables could support individualized clinical decision-making by improving prognostic stratification.
Affiliation(s)
- Yumeng Wang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Huan Lin
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- School of Medicine, South China University of Technology, Guangzhou 510006, China.
- Ningning Yao
- Department of Radiobiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan 030013, China.
- Xiaobo Chen
- First Department of Thoracic Surgery, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China.
- Bingjiang Qiu
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Guangdong Cardiovascular Institute, Guangzhou 510080, China.
- Yanfen Cui
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Department of Radiobiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan 030013, China.
- Guangdong Cardiovascular Institute, Guangzhou 510080, China.
- Yu Liu
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Bingbing Li
- Department of Pathology, Ganzhou Hospital of Guangdong Provincial People’s Hospital, Ganzhou Municipal Hospital, 49 Dagong Road, Ganzhou 341000, China.
- Chu Han
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Zhenhui Li
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China.
- Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China.
- Zimin Wang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Xipeng Pan
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Cheng Lu
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China.
- Zhenbing Liu
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Zaiyi Liu
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China.
- Department of Radiology, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China.
8
Cai W, Xie L, Yang W, Li Y, Gao Y, Wang T. DFTNet: Dual-Path Feature Transfer Network for Weakly Supervised Medical Image Segmentation. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2530-2540. PMID: 35951571. DOI: 10.1109/tcbb.2022.3198284.
Abstract
Medical image segmentation has long suffered from the problem of expensive labels. Acquiring pixel-level annotations is time-consuming, labor-intensive, and relies on extensive expert knowledge. Bounding box annotations, in contrast, are relatively easy to acquire. Thus, in this paper, we explore segmenting images through a novel Dual-path Feature Transfer design using only bounding box annotations. Specifically, a Target-aware Reconstructor is proposed to extract target-related features by reconstructing the pixels within the bounding box through channel and spatial attention modules. Then, a sliding Feature Fusion and Transfer Module (FFTM) fuses the features extracted by the Reconstructor and transfers them to guide the Segmentor. Finally, we present the Confidence Ranking Loss (CRLoss), which dynamically weights the loss of each pixel based on the network's confidence, mitigating the impact of inaccurate pseudo-labels on performance. Extensive experiments demonstrate that our model achieves state-of-the-art performance on the Medical Segmentation Decathlon (MSD) Brain Tumour and PROMISE12 datasets.
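The idea of weighting each pixel's loss by network confidence can be sketched generically. The weighting rule below (confidence = distance from the 0.5 decision boundary) is an assumption for illustration, not the published CRLoss formula:

```python
import math

def confidence_weighted_loss(probs, pseudo_labels):
    """Per-pixel binary cross-entropy, down-weighted where the network is
    unsure. Confidence weight = |p - 0.5| * 2, so pixels near the decision
    boundary (likely unreliable pseudo-labels) contribute less. A sketch
    of the idea behind CRLoss, not its published form."""
    total, weight_sum = 0.0, 0.0
    for p, y in zip(probs, pseudo_labels):
        p = min(max(p, 1e-7), 1 - 1e-7)      # clamp for numerical safety
        bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        w = abs(p - 0.5) * 2.0               # confidence weight in [0, 1]
        total += w * bce
        weight_sum += w
    return total / max(weight_sum, 1e-12)
```

Confidently correct predictions yield a much lower weighted loss than uncertain ones, which is the intended effect of suppressing noisy pseudo-label pixels.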
9
Feng Z, Lin H, Liu Z, Yan L, Wang Y, Li B, Liu E, Han C, Shi Z, Lu C, Liu Z, Pang C, Li Z, Cui Y, Pan X, Chen X. Artificial intelligence-quantified tumour-lymphocyte spatial interaction predicts disease-free survival in resected lung adenocarcinoma: A graph-based, multicentre study. Comput Methods Programs Biomed 2023; 238:107617. PMID: 37235970. DOI: 10.1016/j.cmpb.2023.107617. Received 12/31/2022; Revised 05/01/2023; Accepted 05/17/2023.
Abstract
BACKGROUND AND OBJECTIVE: A high degree of lymphocyte infiltration is related to superior outcomes amongst patients with lung adenocarcinoma. Recent evidence indicates that spatial interactions between tumours and lymphocytes also influence anti-tumour immune responses, but spatial analysis at the cellular level remains insufficient. METHODS: We propose an artificial intelligence-quantified Tumour-Lymphocyte Spatial Interaction score (TLSI-score), computed as the ratio between the number of spatially adjacent tumour-lymphocyte pairs and the number of tumour cells, based on a topology cell graph constructed from H&E-stained whole-slide images. The association of the TLSI-score with disease-free survival (DFS) was explored in 529 patients with lung adenocarcinoma across three independent cohorts (D1, 275; V1, 139; V2, 115). RESULTS: After adjusting for pTNM stage and other clinicopathologic risk factors, a higher TLSI-score was independently associated with longer DFS in all three cohorts [D1, adjusted hazard ratio (HR) 0.674, 95% confidence interval (CI) 0.463-0.983, p = 0.040; V1, adjusted HR 0.408, 95% CI 0.223-0.746, p = 0.004; V2, adjusted HR 0.294, 95% CI 0.130-0.666, p = 0.003]. Integrating the TLSI-score with clinicopathologic risk factors improves the prediction of DFS in the three independent cohorts (C-index: D1, 0.716 vs. 0.701; V1, 0.666 vs. 0.645; V2, 0.708 vs. 0.662). CONCLUSIONS: The TLSI-score shows the second-highest relative contribution to the prognostic prediction model, next to pTNM stage. The TLSI-score can assist in characterising the tumour microenvironment and is expected to promote individualized treatment and follow-up decision-making in clinical practice.
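The score definition above translates directly into a small computation over a cell graph's edge list. The input representation (a cell-id-to-type map plus undirected adjacency pairs) is an assumed encoding for illustration; the ratio itself is as stated in the abstract:

```python
def tlsi_score(cell_types, edges):
    """TLSI-score as defined in the abstract: (# spatially adjacent
    tumour-lymphocyte pairs in the cell graph) / (# tumour cells).
    `cell_types` maps cell id -> 'tumour' or 'lymphocyte'; `edges` are
    undirected adjacency pairs from the topology cell graph."""
    n_tumour = sum(1 for t in cell_types.values() if t == 'tumour')
    if n_tumour == 0:
        return 0.0
    pairs = sum(
        1 for a, b in edges
        if {cell_types[a], cell_types[b]} == {'tumour', 'lymphocyte'}
    )
    return pairs / n_tumour
```

For example, two tumour cells each adjacent to one lymphocyte give two mixed pairs over two tumour cells, a score of 1.0; tumour-tumour edges do not count.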
Affiliation(s)
- Zhengyun Feng
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, 541004, China.
- Huan Lin
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; School of Medicine, South China University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Lixu Yan
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China.
- Yumeng Wang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, 541004, China.
- Bingbing Li
- Department of Pathology, Guangdong Provincial People's Hospital Ganzhou Hospital (Ganzhou Municipal Hospital), Ganzhou, 341000, China.
- Entao Liu
- WeiLun PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China.
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangzhou, 510080, China.
- Cheng Lu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Zhenbing Liu
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, 541004, China.
- Cheng Pang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, 541004, China.
- Zhenhui Li
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Centre, Kunming, 650118, China.
- Yanfen Cui
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangzhou, 510080, China; Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China.
- Xipeng Pan
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, 541004, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Xin Chen
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, 510180, China.
|
10
|
Han Y, Cheng L, Huang G, Zhong G, Li J, Yuan X, Liu H, Li J, Zhou J, Cai M. Weakly supervised semantic segmentation of histological tissue via attention accumulation and pixel-level contrast learning. Phys Med Biol 2023; 68. [PMID: 36577142 DOI: 10.1088/1361-6560/acaeee] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 12/28/2022] [Indexed: 12/29/2022]
Abstract
Objective. Histopathology image segmentation can assist medical professionals in identifying and diagnosing diseased tissue more efficiently. Although fully supervised segmentation models achieve excellent performance, their annotation cost is extremely high. Weakly supervised models are widely used in medical image segmentation because of their low annotation cost. Nevertheless, these weakly supervised models have difficulty accurately locating the boundaries between different classes of regions in pathological images, resulting in a high rate of false alarms. Our objective is to design a weakly supervised segmentation model that resolves these problems. Approach. The segmentation model comprises two main stages: pseudo-label generation based on a class residual attention accumulation network (CRAANet), and semantic segmentation based on a pixel feature space construction network (PFSCNet). CRAANet provides attention scores for each class through the class residual attention module, while the Attention Accumulation (AA) module overlays the attention feature maps generated in each training epoch. PFSCNet employs a network model containing an inflated convolutional residual neural network and a multi-scale feature-aware module as the segmentation backbone, and introduces dense energy loss and pixel clustering modules based on contrast learning to address pseudo-label inaccuracy. Main results. We validate our method on the lung adenocarcinoma (LUAD-HistoSeg) dataset and the breast cancer (BCSS) dataset. The experiments show that the proposed method outperforms other state-of-the-art methods on both datasets across several metrics, suggesting that it performs well on a wide variety of histopathological image segmentation tasks. Significance. We propose a weakly supervised semantic segmentation network that achieves near-fully-supervised segmentation performance even with incomplete labels. The proposed AA and pixel-level contrast learning also make the edges more accurate and can well assist pathologists in their research.
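The Attention Accumulation idea described above, overlaying the attention maps produced in successive training epochs, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the accumulation rule (element-wise maximum followed by min-max normalisation) and the `accumulate_attention` function name are assumptions chosen for demonstration.

```python
import numpy as np

def accumulate_attention(per_epoch_maps):
    """Overlay per-epoch class-attention maps into one accumulated map.

    One plausible accumulation rule: keep the strongest response seen in
    any epoch, then min-max normalise the result to [0, 1].
    """
    acc = np.zeros_like(per_epoch_maps[0], dtype=np.float64)
    for m in per_epoch_maps:
        acc = np.maximum(acc, m)  # element-wise overlay across epochs
    lo, hi = acc.min(), acc.max()
    return (acc - lo) / (hi - lo) if hi > lo else acc

# Toy 4x4 attention maps from three "epochs"
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(3)]
acc = accumulate_attention(maps)
```

Accumulating over epochs lets regions that are only transiently attended in any single epoch still contribute to the final pseudo-label mask.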
Affiliation(s)
- Yongqi Han
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Lianglun Cheng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Guo Zhong
- School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou 510420, People's Republic of China
| | - Jiahua Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Xiaochen Yuan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, People's Republic of China
| | - Hongrui Liu
- Department of Industrial and Systems Engineering, San Jose State University, CA 95192, United States of America
| | - Jiao Li
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
| | - Jian Zhou
- Department of Medical Imaging, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
| | - Muyan Cai
- Department of Pathology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
| |
|
11
|
Pan X, Lin H, Han C, Feng Z, Wang Y, Lin J, Qiu B, Yan L, Li B, Xu Z, Wang Z, Zhao K, Liu Z, Liang C, Chen X, Li Z, Cui Y, Lu C, Liu Z. Computerized tumor-infiltrating lymphocytes density score predicts survival of patients with resectable lung adenocarcinoma. iScience 2022; 25:105605. [PMID: 36505920 PMCID: PMC9730047 DOI: 10.1016/j.isci.2022.105605] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Revised: 10/23/2022] [Accepted: 11/14/2022] [Indexed: 11/17/2022] Open
Abstract
A high abundance of tumor-infiltrating lymphocytes (TILs) has a positive impact on the prognosis of patients with lung adenocarcinoma (LUAD). We aimed to develop and validate an artificial intelligence-driven pathological scoring system for assessing TILs on H&E-stained whole-slide images of LUAD. Deep learning-based methods were applied to calculate the densities of lymphocytes in cancer epithelium (DLCE) and cancer stroma (DLCS), and a risk score (WELL score) was built through linear weighting of DLCE and DLCS. The association between WELL score and patient outcome was explored in 793 patients with stage I-III LUAD across four cohorts. WELL score was an independent prognostic factor for overall survival and disease-free survival in the discovery and validation cohorts. The prognostic prediction model integrating the WELL score demonstrated better discrimination than the clinicopathologic model in all four cohorts. This artificial intelligence-based workflow and scoring system could promote risk stratification for patients with resectable LUAD.
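The WELL score construction, a linear weighting of the two lymphocyte-density features, is simple to illustrate. The weights `w_epi` and `w_str` and the `cutoff` below are made-up placeholder values for demonstration, not the coefficients learned in the study:

```python
def well_score(dlce, dlcs, w_epi=0.6, w_str=0.4):
    """Linear weighting of lymphocyte density in cancer epithelium (DLCE)
    and cancer stroma (DLCS) into a single risk score.
    w_epi and w_str are illustrative, not the published coefficients."""
    return w_epi * dlce + w_str * dlcs

def risk_group(score, cutoff=0.5):
    """Dichotomise the score at a hypothetical cutoff for stratification."""
    return "high-TIL" if score >= cutoff else "low-TIL"

s = well_score(dlce=0.7, dlcs=0.3)
```

In the study the weights would come from fitting the score against survival outcomes (e.g. a Cox model); the sketch only shows the shape of the final scoring rule.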
Affiliation(s)
- Xipeng Pan
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Cardiovascular Institute, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
| | - Huan Lin
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Chu Han
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Zhengyun Feng
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
| | - Yumeng Wang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
| | - Jiatai Lin
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Bingjiang Qiu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Cardiovascular Institute, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Lixu Yan
- Department of Pathology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Bingbing Li
- Department of Pathology, Guangdong Provincial People’s Hospital Ganzhou Hospital (Ganzhou Municipal Hospital), 49 Dagong Road, Ganzhou 341000, China
| | - Zeyan Xu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Zhizhen Wang
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
| | - Ke Zhao
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Cardiovascular Institute, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Zhenbing Liu
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
| | - Changhong Liang
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Xin Chen
- Department of Radiology, Guangzhou First People’s Hospital, School of Medicine, South China University of Technology, Guangzhou 510180, China; Corresponding author
| | - Zhenhui Li
- Guangdong Cardiovascular Institute, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China; Corresponding author
| | - Yanfen Cui
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Cardiovascular Institute, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan 030013, China; Corresponding author
| | - Cheng Lu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; Corresponding author
| | - Zaiyi Liu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China; Corresponding author
| |
|
12
|
Fast and scalable search of whole-slide images via self-supervised deep learning. Nat Biomed Eng 2022; 6:1420-1434. [PMID: 36217022 PMCID: PMC9792371 DOI: 10.1038/s41551-022-00929-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 07/15/2022] [Indexed: 01/14/2023]
Abstract
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs). Computationally identifying WSIs with similar morphologic features within large repositories without requiring supervised training can have significant applications. However, the retrieval speeds of algorithms for searching similar WSIs often scale with the repository size, which limits their clinical and research potential. Here we show that self-supervised deep learning can be leveraged to search for and retrieve WSIs at speeds that are independent of repository size. The algorithm, which we named SISH (for self-supervised image search for histology) and provide as an open-source package, requires only slide-level annotations for training, encodes WSIs into meaningful discrete latent representations and leverages a tree data structure for fast searching followed by an uncertainty-based ranking algorithm for WSI retrieval. We evaluated SISH on multiple tasks (including retrieval tasks based on tissue-patch queries) and on datasets spanning over 22,000 patient cases and 56 disease subtypes. SISH can also be used to aid the diagnosis of rare cancer types for which the number of available WSIs is often insufficient to train supervised deep-learning models.
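The core retrieval idea behind SISH, mapping each slide to a discrete code and searching an ordered structure so that lookup cost does not grow linearly with repository size, can be caricatured as follows. The `ToySlideIndex` class, its sum-based `encode` hash, and the bisect-based lookup are stand-ins for SISH's learned discrete latent codes, tree index, and uncertainty-based ranking, not the published method:

```python
import bisect

class ToySlideIndex:
    """Toy sketch: slides are reduced to discrete integer codes kept in
    sorted order, so retrieval is a binary search plus a small local
    scan, rather than a linear pass over the whole repository."""

    def __init__(self):
        self.codes, self.slide_ids = [], []

    @staticmethod
    def encode(features):
        # Stand-in "discrete latent code": a coarsely quantised feature sum.
        return int(round(sum(features) * 10))

    def add(self, slide_id, features):
        code = self.encode(features)
        i = bisect.bisect_left(self.codes, code)  # O(log n) placement
        self.codes.insert(i, code)
        self.slide_ids.insert(i, slide_id)

    def query(self, features, k=1):
        code = self.encode(features)
        i = bisect.bisect_left(self.codes, code)
        # Only the few neighbours around the insertion point are inspected,
        # so query cost does not scale with repository size.
        window = range(max(0, i - k), min(len(self.codes), i + k + 1))
        ranked = sorted(window, key=lambda j: abs(self.codes[j] - code))
        return [self.slide_ids[j] for j in ranked[:k]]

idx = ToySlideIndex()
idx.add("slide_A", [0.1, 0.2])
idx.add("slide_B", [0.5, 0.5])
idx.add("slide_C", [0.9, 0.8])
hit = idx.query([0.52, 0.49], k=1)  # nearest code belongs to slide_B
```

The real system encodes gigapixel WSIs with self-supervised learning and vector quantisation; the point of the sketch is only the search-cost structure, not the encoding.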
|