1
Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Comput Methods Programs Biomed 2024; 250:108178. [PMID: 38652995] [DOI: 10.1016/j.cmpb.2024.108178]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they still give unsatisfactory boundary and region segmentation results for adjacent glands. Such glands often differ markedly in appearance, and the statistical distributions of the training and test sets are inconsistent. As a result, networks generalize poorly to the test set, which hampers gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network (VENet) with a traditional variational energy Lv loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data adaptability of deep learning to balance boundary and region segmentation. Furthermore, it can segment and classify glands in large WSIs using reliable nucleus-width and nucleus-to-cytoplasm-ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). For the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification of 69 test WSIs.
CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, which can assist pathologists in analyzing large WSIs and making accurate diagnostic decisions.
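The variational energy behind VENet couples a region-fit term with a boundary penalty. As a rough illustration only (not the paper's actual Lv loss), a discrete Chan-Vese-style energy can be written in a few lines; the function name `variational_energy`, the 4-neighbour perimeter, and the weight `lam` are all illustrative choices:

```python
def variational_energy(image, mask, lam=1.0):
    """Discrete Chan-Vese-style energy on a 2-D grid (illustrative sketch).
    Region term: squared deviation from the mean intensity inside/outside the mask.
    Boundary term: count of 4-neighbour label changes (a discrete perimeter)."""
    h, w = len(image), len(image[0])
    inside = [image[i][j] for i in range(h) for j in range(w) if mask[i][j]]
    outside = [image[i][j] for i in range(h) for j in range(w) if not mask[i][j]]
    c1 = sum(inside) / len(inside) if inside else 0.0
    c2 = sum(outside) / len(outside) if outside else 0.0
    region = sum((image[i][j] - (c1 if mask[i][j] else c2)) ** 2
                 for i in range(h) for j in range(w))
    perimeter = sum(mask[i][j] != mask[i][j + 1] for i in range(h) for j in range(w - 1))
    perimeter += sum(mask[i][j] != mask[i + 1][j] for i in range(h - 1) for j in range(w))
    return region + lam * perimeter
```

A mask that fits the image's regions drives the region term to zero while the perimeter term discourages ragged boundaries, which is the balance the abstract describes.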
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China
- Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
2
Zhou C, Ye L, Peng H, Liu Z, Wang J, Ramírez-De-Arellano A. A Parallel Convolutional Network Based on Spiking Neural Systems. Int J Neural Syst 2024; 34:2450022. [PMID: 38487872] [DOI: 10.1142/s0129065724500229]
Abstract
Deep convolutional neural networks have shown advanced performance in accurately segmenting images. In this paper, an SNP-like convolutional neuron structure is introduced, abstracted from the nonlinear mechanism of nonlinear spiking neural P (NSNP) systems. A U-shaped convolutional neural network, the SNP-like parallel-convolutional network (SPC-Net), is then constructed for segmentation tasks. Dual-convolution concatenate (DCC) and dual-convolution addition (DCA) network blocks are designed for the encoder and decoder stages, respectively. Both blocks employ parallel convolutions with different kernel sizes to improve feature representation and make full use of spatial detail, and they fuse their features with different strategies to achieve feature complementarity and augmentation. Furthermore, a dual-scale pooling (DSP) module in the bottleneck improves feature extraction: it captures multi-scale contextual information and reduces information loss while extracting salient features. SPC-Net is applied to medical image segmentation and compared with several recent segmentation methods on the GlaS and CRAG datasets. It achieves a 90.77% Dice coefficient, 83.76% IoU, 83.93% F1 score, 86.33% ObjDice coefficient, and 135.60 Obj-Hausdorff distance. The experimental results show that the proposed model achieves good segmentation performance.
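For reference, the pixel-level Dice and IoU scores reported above have simple closed forms; this sketch (the function name `dice_iou` is ours) computes them from flat binary masks. Note that pixel-level Dice equals pixel-level F1; the object-level variants reported on GlaS additionally require matching predicted and ground-truth gland instances.

```python
def dice_iou(pred, gt):
    """Pixel-level Dice coefficient and IoU for flat binary masks (0/1 lists)."""
    tp = sum(p and g for p, g in zip(pred, gt))   # true-positive pixel count
    p_sum, g_sum = sum(pred), sum(gt)
    dice = 2 * tp / (p_sum + g_sum) if (p_sum + g_sum) else 1.0
    union = p_sum + g_sum - tp                    # |pred ∪ gt|
    iou = tp / union if union else 1.0
    return dice, iou
```

For example, masks overlapping on one of three foreground pixels give Dice 0.5 and IoU 1/3.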
Affiliation(s)
- Chi Zhou
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Lulin Ye
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Hong Peng
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Zhicai Liu
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Jun Wang
- School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, P. R. China
- Antonio Ramírez-De-Arellano
- Research Group of Natural Computing, Department of Computer Science and Artificial Intelligence, University of Seville, Sevilla 41012, Spain
3
Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. Int J Comput Assist Radiol Surg 2023; 18:2319-2328. [PMID: 36934367] [DOI: 10.1007/s11548-023-02857-7]
Abstract
PURPOSE Reliable quantification of colorectal histopathological images depends on precise gland segmentation, which is challenging because glandular morphology varies widely across histological grades: malignant glands and non-gland tissues can be too similar to distinguish, and tightly connected glands are easily mis-segmented as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to extract effective features while retaining as much information as possible, and a Multi-Scale Fusion module is proposed to enlarge the receptive field. In addition, to segment densely packed glands individually, a Multi-Scale Edge-Refined module is designed to strengthen gland boundaries. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrate that our network has better overall performance and is more competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, the object-level Dice scores 0.921 and 0.884, and the object-level Hausdorff distances 43.428 and 87.132, respectively. CONCLUSION The proposed colorectal gland segmentation network effectively extracts highly representative features and enhances edge features while retaining detail, dramatically improving segmentation of malignant glands and yielding better results on multi-scale and closely packed glands.
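The object-level Hausdorff distance quoted here measures the worst-case boundary deviation between matched gland contours. A minimal sketch of the symmetric Hausdorff distance between two point sets, using only the standard library:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (lists of (x, y))."""
    def directed(p, q):
        # worst case over p of the distance to the nearest point of q
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))
```

In the GlaS evaluation this distance is computed per matched gland pair and averaged with object-size weights; the sketch shows only the core set-to-set distance.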
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
4
Zhang J, Zhang Y, Jin Y, Xu J, Xu X. MDU-Net: multi-scale densely connected U-Net for biomedical image segmentation. Health Inf Sci Syst 2023; 11:13. [PMID: 36925619] [PMCID: PMC10011258] [DOI: 10.1007/s13755-022-00204-9]
Abstract
Biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Building on fully convolutional networks (FCN) and U-Net, deep convolutional neural networks (DCNNs) have made significant contributions to biomedical image segmentation. In this paper, we propose three different multi-scale dense connections (MDC): for the encoder, for the decoder of U-shaped architectures, and across them. Based on these three dense connections, we propose a multi-scale densely connected U-Net (MDU-Net) for biomedical image segmentation. MDU-Net directly fuses neighboring feature maps at different scales from both higher and lower layers to strengthen feature propagation in the current layer. The multi-scale dense connections, which contain shorter paths between layers close to the input and output, also make a much deeper U-Net feasible. In addition, we introduce quantization to alleviate the potential overfitting of dense connections and further improve segmentation performance. We evaluate our proposed model on the MICCAI 2015 Gland Segmentation (GlaS) dataset. The three MDC improve U-Net performance by up to 1.8% on Test A and 3.5% on Test B. Meanwhile, MDU-Net with quantization clearly improves the segmentation performance of the original U-Net.
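The dense connectivity pattern can be illustrated with a scalar toy: each layer consumes the concatenation of the input and every earlier layer's output. This is only a sketch of the connection topology, with a trivial averaging stand-in for the convolutional layers (all names here are ours):

```python
def dense_forward(x, num_layers):
    """Toy dense-connection forward pass: layer l receives the concatenation
    of the input and all earlier layer outputs (averaging replaces convs)."""
    feats = [x]                                   # list of per-layer feature lists
    for _ in range(num_layers):
        concat = [v for f in feats for v in f]    # dense skip from every earlier layer
        out = [sum(concat) / len(concat)]         # stand-in for a conv layer
        feats.append(out)
    return feats
```

The multi-scale variant in MDU-Net additionally rescales feature maps before concatenating them, which the toy omits.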
Affiliation(s)
- Jiawei Zhang
- The Department of New Networks, Peng Cheng Laboratory, Shenzhen, Guangdong, China
- Department of Cardiovascular Surgery, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, Guangdong, China
- Institute for Sustainable Industries & Livable Cities, Victoria University, Melbourne, VIC, Australia
- Yanchun Zhang
- The Department of New Networks, Peng Cheng Laboratory, Shenzhen, Guangdong, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, Guangdong, China
- Institute for Sustainable Industries & Livable Cities, Victoria University, Melbourne, VIC, Australia
- Yuzhen Jin
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Jilan Xu
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Xiaowei Xu
- Department of Cardiovascular Surgery, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China
5
Sun M, Wang J, Gong Q, Huang W. Enhancing gland segmentation in colon histology images using an instance-aware diffusion model. Comput Biol Med 2023; 166:107527. [PMID: 37778210] [DOI: 10.1016/j.compbiomed.2023.107527]
Abstract
In pathological image analysis, determining gland morphology in colon histology images is essential for grading colon cancer. However, manual segmentation of glands is extremely challenging, and there is a need for automatic gland instance segmentation methods. Recently, thanks to its powerful noise-to-image denoising pipeline, the diffusion model has become a hot spot in computer vision research and has been explored for image segmentation. In this paper, we propose a diffusion-based instance segmentation method that performs automatic gland instance segmentation. First, we model instance segmentation of colon histology images as a denoising process based on a diffusion model. Second, to recover details lost during denoising, we use Instance-Aware Filters and a multi-scale Mask Branch to construct a global mask instead of predicting only local masks. Third, to improve the distinction between object and background, we apply Conditional Encoding to enhance intermediate features with the original image encoding. To objectively validate the proposed method, we compared several state-of-the-art deep learning models on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset (165 images), the Colorectal Adenocarcinoma Glands (CRAG) dataset (213 images), and the RINGS dataset (1500 images). Our method obtains significantly improved results on CRAG (object F1 0.853 ± 0.054, object Dice 0.906 ± 0.043), GlaS Test A (object F1 0.941 ± 0.039, object Dice 0.939 ± 0.060), GlaS Test B (object F1 0.893 ± 0.073, object Dice 0.889 ± 0.069), and RINGS (precision 0.893 ± 0.096, Dice 0.904 ± 0.091). These results demonstrate that the method significantly improves segmentation accuracy.
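Diffusion-based segmentation rests on the closed-form forward process q(x_t | x_0), which corrupts a clean mask x_0 toward noise. A minimal scalar sketch of that closed form (the schedule `betas` and the function name are illustrative, not the paper's implementation):

```python
import math
import random

def forward_diffuse(x0, t, betas, rng):
    """Closed-form forward diffusion q(x_t | x_0):
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, 1),
    where abar_t is the cumulative product of (1 - beta) up to step t."""
    abar = 1.0
    for beta in betas[: t + 1]:
        abar *= 1.0 - beta
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps, abar
```

The segmentation network is then trained to invert this process, denoising noisy masks back to gland instances conditioned on the image.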
Affiliation(s)
- Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai, 264025, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
6
Sun J, Zhang X, Li X, Liu R, Wang T. DARMF-UNet: A dual-branch attention-guided refinement network with multi-scale features fusion U-Net for gland segmentation. Comput Biol Med 2023; 163:107218. [PMID: 37393784] [DOI: 10.1016/j.compbiomed.2023.107218]
Abstract
Accurate gland segmentation is critical in determining adenocarcinoma. Automatic gland segmentation methods currently suffer from challenges such as inaccurate edge segmentation, mis-segmentation, and incomplete segmentation. To solve these problems, this paper proposes a novel gland segmentation network, the Dual-branch Attention-guided Refinement and Multi-scale Features Fusion U-Net (DARMF-UNet), which fuses multi-scale features using deep supervision. At the first three feature-concatenation layers, a Coordinate Parallel Attention (CPA) module is proposed to guide the network toward key regions. A Dense Atrous Convolution (DAC) block is used at the fourth concatenation layer to perform multi-scale feature extraction and capture global information. A hybrid loss function computes the loss of each segmentation output of the network to achieve deep supervision and improve segmentation accuracy. Finally, the segmentation results at different scales are fused to obtain the final gland segmentation. Experimental results on the Warwick-QU and CRAG gland datasets show that the network improves the F1 score, object Dice, and object Hausdorff metrics, and that its segmentation quality surpasses state-of-the-art network models.
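The abstract does not spell out the hybrid loss; a common instantiation in gland segmentation combines binary cross-entropy with a soft Dice term. This sketch assumes that combination and an illustrative 50/50 weighting `w`:

```python
import math

def hybrid_loss(probs, gt, w=0.5):
    """Weighted sum of binary cross-entropy and soft Dice loss over flat
    per-pixel probabilities (an assumed, common hybrid form)."""
    eps = 1e-7                                    # numerical stability
    bce = -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
               for p, g in zip(probs, gt)) / len(probs)
    inter = sum(p * g for p, g in zip(probs, gt))
    dice = (2 * inter + eps) / (sum(probs) + sum(gt) + eps)
    return w * bce + (1 - w) * (1 - dice)
```

The BCE term drives per-pixel accuracy while the Dice term counteracts foreground/background imbalance, which is why the pairing is popular for deep supervision.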
Affiliation(s)
- Junmei Sun
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Xin Zhang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Xiumei Li
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Ruyu Liu
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Tianyang Wang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
7
Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. [PMID: 37146492] [DOI: 10.1016/j.compbiomed.2023.106924]
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference material for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin-vessel breakage and false positives in areas with lesions or low contrast. In this work, we propose a new network, the differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency-constrained backbone for thin-vessel segmentation. The differential matched filtering provides early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the spatially linear vessel features at each stage of the model. Multiscale constraints reduce the loss of vessel information when pooling over large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several criteria specially designed for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
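Matched filtering models a vessel's dark cross-section as an inverted Gaussian profile and correlates it with the image. A 1-D sketch of that classic idea (parameters and function name are illustrative; this is not the DMF-AU layer itself):

```python
import math

def matched_filter_1d(signal, sigma=1.0, radius=2):
    """Correlate a 1-D signal with a zero-mean inverted-Gaussian kernel,
    the classic matched-filter model of a vessel's dark cross-section."""
    kernel = [-math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    mean = sum(kernel) / len(kernel)
    kernel = [k - mean for k in kernel]   # zero-mean: flat background responds ~0
    out = []
    for c in range(radius, len(signal) - radius):
        out.append(sum(kernel[i + radius] * signal[c + i]
                       for i in range(-radius, radius + 1)))
    return out
```

A flat background yields near-zero response, while a dark linear dip produces a strong positive peak, giving the rough vessel map that guides the backbone.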
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
8
Diao Z, Jiang H, Zhou Y. Leverage prior texture information in deep learning-based liver tumor segmentation: A plug-and-play Texture-Based Auto Pseudo Label module. Comput Med Imaging Graph 2023; 106:102217. [PMID: 36958076] [DOI: 10.1016/j.compmedimag.2023.102217]
Abstract
Segmenting the liver and tumor regions in CT scans is crucial for subsequent treatment in clinical practice and radiotherapy. Recently, liver and tumor segmentation techniques based on U-Net have gained popularity. However, liver tumors are highly varied, differing greatly in shape and texture, so it is unreasonable to treat all liver tumors as one class for learning. Meanwhile, texture information is crucial for identifying liver tumors. We propose a plug-and-play Texture-based Auto Pseudo Label (TAPL) module that exploits tumor texture and lets the neural network actively learn the texture differences between tumors, increasing segmentation accuracy, especially for small tumors. The TAPL module consists of two parts: texture enhancement and a texture-based pseudo-label generator. To highlight regions where texture varies significantly, we enhance the textured areas of the CT image. The texture-based pseudo-label generator then automatically divides tumors into several classes based on their texture. The multi-class tumor predictions produced by the network are merged into a single tumor label, which serves as the final segmentation. Experiments on a clinical dataset and the public LiTS 2017 dataset show that the proposed algorithm outperforms single-label liver tumor segmentation methods and is friendlier to small tumors.
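As a toy illustration of the texture-based pseudo-label idea (the thresholds and the use of plain intensity variance are our assumptions, not the TAPL module's actual texture statistic):

```python
def texture_pseudo_labels(tumor_patches, thresholds=(1.0, 4.0)):
    """Assign each tumor patch a pseudo class from its intensity variance.
    Illustrative stand-in for a texture-based pseudo-label generator:
    higher-variance (more textured) patches get higher class indices."""
    labels = []
    for patch in tumor_patches:
        mean = sum(patch) / len(patch)
        var = sum((v - mean) ** 2 for v in patch) / len(patch)
        labels.append(sum(var >= t for t in thresholds))  # class 0, 1, or 2
    return labels
```

At prediction time the abstract's pipeline merges these texture classes back into one tumor label, so the extra classes only shape training.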
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Yang Zhou
- Software College, Northeastern University, Shenyang 110819, China
9
Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023; 155:106690. [PMID: 36827788] [DOI: 10.1016/j.compbiomed.2023.106690]
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented that aims to supply clinically informative gland details by capturing every small and intricate variation in medical images, integrating second opinions, and reducing human error. APPROACH The model's enhanced learning capability extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution degradation and vanishing gradients. It has three proposed modules: an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. Pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to improve generalization. To verify robustness and network invariance against digital variability, extensive experiments were carried out on three public datasets, GlaS (Gland Segmentation challenge), CRAG (Colorectal Adenocarcinoma Glands), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The model achieved competitive gland detection F1-scores (GlaS Test A 0.957, Test B 0.926; CRAG 0.935; LC-25000 0.922; HosC 0.963) and gland segmentation results with object Dice index (GlaS Test A 0.961, Test B 0.933; CRAG 0.961; LC-25000 0.940; HosC 0.929) and object Hausdorff distance (GlaS Test A 21.77, Test B 69.74; CRAG 87.63; LC-25000 95.85; HosC 83.29). In addition, validation scores supplied by proficient pathologists for the final segmentation results (GlaS Test A 0.945, Test B 0.937; CRAG 0.934; LC-25000 0.911; HosC 0.928) corroborate their applicability and appropriateness for clinical-level assistance.
CONCLUSION The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
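Among the pre-processing steps listed above, stain normalization is often implemented as Reinhard-style mean/variance transfer toward a reference image. A one-channel sketch of that transfer (our simplification; real pipelines operate per channel in LAB color space):

```python
def reinhard_normalize(src_channel, ref_mean, ref_std):
    """Mean/std color transfer of one channel, the core of Reinhard-style
    stain normalization: standardize the source, then rescale to the
    reference statistics."""
    n = len(src_channel)
    mean = sum(src_channel) / n
    var = sum((v - mean) ** 2 for v in src_channel) / n
    std = var ** 0.5 or 1.0          # guard against a constant channel
    return [(v - mean) / std * ref_std + ref_mean for v in src_channel]
```

After the transfer, the source channel has exactly the reference mean and standard deviation, which reduces inter-slide stain variability before training.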
Affiliation(s)
- Manju Dabass
- EECE Deptt, The NorthCap University, Gurugram, India
- Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
10
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
11
Gao E, Jiang H, Zhou Z, Yang C, Chen M, Zhu W, Shi F, Chen X, Zheng J, Bian Y, Xiang D. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Comput Biol Med 2022; 151:106228. [PMID: 36306579] [DOI: 10.1016/j.compbiomed.2022.106228]
Abstract
The morphology of tissues in pathological images is used routinely by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step toward reliable morphological statistics, yet it remains challenging due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets, and ducts in pancreatic pathological images. The selected multi-scale attention module enhances effective information, supplements useful information, and suppresses redundant information at different scales of the encoder and decoder. It includes a selection unit (SU) module, which effectively filters features, and a multi-scale attention (MA) module, which enhances effective information through spatial and channel attention and combines features from different levels to supplement useful information. This helps the network learn information from different receptive fields and improves the segmentation of tumor cells, blood vessels, and nerves. An original-feature fusion unit is also proposed to reinject original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset. The mDice and mIoU reached 0.769 and 0.665 on our PDAC dataset.
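The channel-attention half of such a module can be sketched as softmax reweighting of globally pooled channel responses (a generic sketch of the idea, not the SMANet implementation; all names are ours):

```python
import math

def channel_attention(channels):
    """Softmax channel attention: global-average-pool each channel, softmax
    the pooled values, and reweight each channel by its attention weight."""
    pooled = [sum(c) / len(c) for c in channels]      # global average pooling
    m = max(pooled)                                   # subtract max for stability
    exps = [math.exp(p - m) for p in pooled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[w * v for v in c] for w, c in zip(weights, channels)]
```

Channels with stronger average activation receive larger weights, so informative channels are amplified relative to redundant ones.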
Affiliation(s)
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Hui Jiang
- Department of Pathology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Zhibang Zhou
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Changxing Yang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Muyang Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Jiangsu 215163, China
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
12
Dabass M, Vashisth S, Vig R. MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland segmentation in colon histopathological images. Comput Biol Med 2022; 150:106095. [PMID: 36179516] [DOI: 10.1016/j.compbiomed.2022.106095]
Abstract
A clinically compatible multi-tasking deep U-Net-based model is demonstrated in this paper. It offers clinical gland morphometric information and cancer grade classification as referential opinions for pathologists in order to reduce human error. Its enhanced feature learning aids the extraction of potent multi-scale features, recovers the semantic gap during feature concatenation, and intercepts resolution-degradation and vanishing-gradient problems at moderate computational cost. The model integrates three novel structural components into the traditional U-Net architecture: Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning through conventional, atrous, residual, depth-wise, and point-wise convolutions with target-specific attention learning and an enlarged effective receptive field. Pre-processing techniques of patch sampling, color and morphological augmentation, and stain normalization are employed to strengthen generalizability. To build network invariance toward digital variability, exhaustive experiments were conducted on three public datasets (Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K)), and robustness was then verified on an in-house private Hospital Colon (HosC) dataset. For cancer classification, the proposed model achieved Accuracy (CRAG 95%, GlaS 97.5%, LC-25K 99.97%, HosC 99.45%), Precision (CRAG 0.9678, GlaS 0.9768, LC-25K 1, HosC 1), F1-score (CRAG 0.968, GlaS 0.977, LC-25K 0.9997, HosC 0.9965), and Recall (CRAG 0.9677, GlaS 0.9767, LC-25K 0.9994, HosC 0.9931).
For gland detection and segmentation, the proposed model achieved competitive F1-scores (CRAG 0.924; GlaS Test A 0.949, Test B 0.918; LC-25K 0.916; HosC 0.959), object Dice indices (CRAG 0.959; GlaS Test A 0.956, Test B 0.909; LC-25K 0.929; HosC 0.922), and object Hausdorff distances (CRAG 90.47; GlaS Test A 23.17, Test B 71.53; LC-25K 96.28; HosC 85.45). In addition, activation mappings testing the interpretability of the classification decisions are reported using Local Interpretable Model-Agnostic Explanations, Occlusion Sensitivity, and Gradient-Weighted Class Activation Mappings, providing evidence of the model's ability to learn the patterns pathologists consider relevant without any prerequisite annotations. Proficient pathologists evaluated these activation-mapping visualizations, giving class-path validation scores of CRAG 9.31, GlaS 9.25, LC-25K 9.05, and HosC 9.85. Furthermore, seg-path validation scores from multiple pathologists (GlaS Test A 9.40, Test B 9.25; CRAG 9.27; LC-25K 9.01; HosC 9.19) are included for the final segmented outcomes to substantiate the clinical relevance and suitability for facilitation at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during morphology assessment of histopathology images. It will reduce unintentional human error in cancer diagnosis and consequently enhance patient survival.
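The enlarged receptive field provided by atrous (dilated) convolutions follows a simple formula for a stride-1 stack: each layer adds (k - 1) * d pixels to the field. A quick sketch:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions:
    rf = 1 + sum over layers of (kernel - 1) * dilation."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

For example, three 3-wide convolutions with dilations 1, 2, 4 cover a 15-pixel field, versus 7 pixels for the same stack without dilation, at identical parameter cost.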
Affiliation(s)
- Manju Dabass
- EECE Dept., The NorthCap University, Gurugram, 122017, India
- Sharda Vashisth
- EECE Dept., The NorthCap University, Gurugram, 122017, India
- Rekha Vig
- EECE Dept., The NorthCap University, Gurugram, 122017, India
|
13
|
Zhang H, Liu J, Wang P, Yu Z, Liu W, Chen H. Cross-Boosted Multi-Target Domain Adaptation for Multi-Modality Histopathology Image Translation and Segmentation. IEEE J Biomed Health Inform 2022; 26:3197-3208. [PMID: 35196252 DOI: 10.1109/jbhi.2022.3153793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent digital pathology workflows mainly focus on mono-modality histopathology image analysis. However, they ignore the complementarity between Haematoxylin & Eosin (H&E) and Immunohistochemical (IHC) stained images, which together can provide a comprehensive gold standard for cancer diagnosis. To resolve this issue, we propose a cross-boosted multi-target domain adaptation pipeline for multi-modality histopathology images, comprising a Cross-frequency Style-auxiliary Translation Network (CSTN) and a Dual Cross-boosted Segmentation Network (DCSN). First, CSTN achieves one-to-many translation from fluorescence microscopy images to H&E and IHC images to provide source-domain training data. To generate images with realistic color and texture, a Cross-frequency Feature Transfer Module (CFTM) is developed to restructure and normalize high-frequency content and low-frequency style features from different domains. Then, DCSN performs multi-target domain-adaptive segmentation, where a dual-branch encoder is introduced and a Bidirectional Cross-domain Boosting Module (BCBM) is designed to implement cross-modality information complementation through bidirectional inter-domain collaboration. Finally, we establish the Multi-modality Thymus Histopathology (MThH) dataset, the largest publicly available H&E and IHC image benchmark. Experiments on the MThH dataset and several public datasets show that the proposed pipeline outperforms state-of-the-art methods on both histopathology image translation and segmentation.
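The abstract does not detail CFTM, but a common way to separate low-frequency style (color, illumination) from high-frequency content (structure) is amplitude swapping in the Fourier domain, as in Fourier Domain Adaptation. The sketch below is an illustrative stand-in for that general idea, not the paper's module; the `beta` mask fraction is an assumed parameter:

```python
import numpy as np

def swap_low_frequency(content, style, beta=0.1):
    """Replace the low-frequency amplitude of `content` with that of `style`.
    Low frequencies carry global style; high frequencies keep structure.
    content, style: (H, W) float arrays of equal shape."""
    fc = np.fft.fft2(content)
    fs = np.fft.fft2(style)
    amp_c, pha_c = np.abs(fc), np.angle(fc)
    amp_s = np.abs(fs)
    # center the spectrum, then swap a small square around the DC component
    H, W = content.shape
    b = int(min(H, W) * beta)
    amp_c = np.fft.fftshift(amp_c)
    amp_s = np.fft.fftshift(amp_s)
    ch, cw = H // 2, W // 2
    amp_c[ch - b:ch + b, cw - b:cw + b] = amp_s[ch - b:ch + b, cw - b:cw + b]
    amp_c = np.fft.ifftshift(amp_c)
    # recombine style amplitude with content phase
    out = np.fft.ifft2(amp_c * np.exp(1j * pha_c))
    return np.real(out)
```

Because only the low-frequency amplitude is exchanged while the content phase is kept, edges and textures of the content image survive while its global appearance shifts toward the style image.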
|
14
|
Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Med Image Anal 2022; 80:102487. [PMID: 35671591 DOI: 10.1016/j.media.2022.102487] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 05/07/2022] [Accepted: 05/20/2022] [Indexed: 01/15/2023]
Abstract
Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have achieved outstanding performance with dense pixel-level annotations; however, drawing such labels on giga-pixel whole slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, a CAM-based model generates pseudo masks from patch-level labels. In the segmentation phase, tissue semantic segmentation is achieved by our proposed Multi-Layer Pseudo-Supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct several experiments to evaluate the proposed model on two datasets. It outperforms five state-of-the-art WSSS approaches and achieves quantitative and qualitative results comparable to the fully supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling on a randomly sampled 100-patch dataset, patch-level labeling greatly reduces annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
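The CAM-based pseudo-mask step follows the classic Class Activation Mapping recipe: project the patch classifier's linear weights back onto the final convolutional feature maps, then threshold the resulting heatmap. A minimal NumPy sketch of that recipe (the threshold value and array shapes here are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM: weight the final conv feature maps by the target class's weights.
    features: (C, H, W) last-conv feature maps for one image.
    fc_weights: (num_classes, C) linear-layer weights after global average pooling."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1]
    return cam

def pseudo_mask(cam, threshold=0.3):
    """Binarize the normalized CAM into a pseudo segmentation mask."""
    return (cam >= threshold).astype(np.uint8)
```

The resulting binary masks are noisy at object boundaries, which is precisely the pixel-level/patch-level information gap that the paper's multi-layer pseudo-supervision is designed to narrow.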
|
15
|
Wang X, Wang L, Sheng Y, Zhu C, Jiang N, Bai C, Xia M, Shao Z, Gu Z, Huang X, Zhao R, Liu Z. Automatic and accurate segmentation of peripherally inserted central catheter (PICC) from chest X-rays using multi-stage attention-guided learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
16
|
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive, up-to-date review of deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning to various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems in pathological image analysis are provided for the convenience of researchers interested in this exciting field.
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China
- Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
|