1.
Abhisheka B, Biswas SK, Purkayastha B. HBMD-Net: Feature Fusion Based Breast Cancer Classification with Class Imbalance Resolution. J Imaging Inform Med 2024;37:1440-1457. [PMID: 38409609; PMCID: PMC11300733; DOI: 10.1007/s10278-024-01046-5]
Abstract
Breast cancer, a widespread global disease, poses a significant threat to women's health and lives, ranking among the most prevalent malignant tumors they face. Many researchers have proposed computer-aided diagnosis systems for classifying breast cancer. Most of these approaches rely on deep learning (DL) methods, which are not entirely reliable: they overlook the need to incorporate both local and global information for precise tumor detection, even though such subtle cues are crucial for accurate breast cancer classification. In addition, publicly available breast cancer datasets are few, and those that exist tend to be imbalanced. This paper therefore presents the hybrid breast mass detection network (HBMD-Net) to address two critical challenges: class imbalance, and the fact that relying solely on either global or local features falls short of precise tumor classification. To overcome class imbalance, HBMD-Net incorporates the borderline synthetic minority over-sampling technique (BSMOTE). Simultaneously, it employs feature fusion, combining deep features extracted with ResNet50, which provide global information, with handcrafted histogram of oriented gradients (HOG) features, which provide local information. ROI segmentation is also applied to avoid misclassifications. This integrated strategy substantially enhances breast cancer classification performance. Moreover, the proposed method integrates the block-matching and 3D filtering (BM3D) denoising filter to remove multiplicative noise, which further improves system performance. The proposed HBMD-Net is evaluated on two breast ultrasound (BUS) datasets, BUSI and UDIAT, achieving accuracies of 99.14% and 94.49%, respectively.
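The BSMOTE step can be sketched in plain NumPy: only "borderline" minority samples (those whose neighbourhood is dominated, but not fully occupied, by the majority class) spawn synthetic points by interpolation toward minority neighbours. This is an illustrative re-implementation of Borderline-SMOTE1, not the paper's code; names and the balancing heuristic are assumptions:

```python
import numpy as np

def borderline_smote(X, y, minority=1, k=5, n_new=None, rng=None):
    """Minimal Borderline-SMOTE1: oversample only 'danger' minority
    samples, i.e. those whose k nearest neighbours are at least half
    (but not entirely) majority-class."""
    rng = np.random.default_rng(rng)
    X_min = X[y == minority]
    # default: add enough points to balance the two classes
    n_new = n_new if n_new is not None else len(X) - 2 * len(X_min)
    # k nearest neighbours of each minority sample among ALL samples
    d = np.linalg.norm(X_min[:, None, :] - X[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]        # column 0 is the sample itself
    maj_frac = (y[nn] != minority).mean(axis=1)
    danger = X_min[(maj_frac >= 0.5) & (maj_frac < 1.0)]
    if len(danger) == 0 or n_new <= 0:
        return X, y
    # each synthetic point lies between a danger sample and a random
    # minority-class sample
    base = danger[rng.integers(0, len(danger), n_new)]
    mate = X_min[rng.integers(0, len(X_min), n_new)]
    gap = rng.random((n_new, 1))
    X_syn = base + gap * (mate - base)
    return np.vstack([X, X_syn]), np.concatenate([y, np.full(n_new, minority)])

# toy demo: 4 majority corners, 2 minority points inside the cluster
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.4, 0.4], [0.45, 0.45]])
y = np.array([0, 0, 0, 0, 1, 1])
Xr, yr = borderline_smote(X, y, k=3, rng=0)
print(Xr.shape, (yr == 1).sum())  # (8, 2) 4
```

Both minority points are "danger" samples here (each has one minority and two majority neighbours), so two synthetic points are interpolated between them until the classes balance.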
Affiliation(s)
- Barsha Abhisheka
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India.
- Saroj Kr Biswas
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
2.
Yu T, Yu R, Liu M, Wang X, Zhang J, Zheng Y, Lv F. Integrating intratumoral and peritumoral radiomics with deep transfer learning for DCE-MRI breast lesion differentiation: A multicenter study comparing performance with radiologists. Eur J Radiol 2024;177:111556. [PMID: 38875748; DOI: 10.1016/j.ejrad.2024.111556]
Abstract
PURPOSE To fuse radiomics and deep transfer learning features from the intratumoral and peritumoral areas of breast DCE-MRI images to differentiate between benign and malignant breast tumors, and to compare the diagnostic accuracy of this fusion model against the assessments made by experienced radiologists. MATERIALS AND METHODS This multi-center study conducted a retrospective analysis of DCE-MRI images from 330 women diagnosed with breast cancer, with 138 cases categorized as benign and 192 as malignant. The training and internal testing sets comprised 270 patients from center 1, while the external testing cohort consisted of 60 patients from center 2. A fusion feature set consisting of radiomics features and deep transfer learning features was constructed from both the intratumoral (ITR) and peritumoral (PTR) areas. A least absolute shrinkage and selection operator (LASSO)-based support vector machine was chosen as the classifier after comparing its performance with five other machine learning models. The diagnostic performance and clinical usefulness of the fusion model were assessed through the area under the receiver operating characteristic (ROC) curve and decision curve analysis. Additionally, the performance of the fusion model was compared with the diagnostic assessments of two experienced radiologists to evaluate its relative accuracy. The study strictly adhered to the CLEAR and METRICS guidelines to ensure rigorous and reproducible methods. RESULTS The findings show that the fusion model, utilizing radiomics and deep transfer learning features from the ITR and PTR, exhibited exceptional performance in classifying breast tumors, achieving AUCs of 0.950 in the internal testing set and 0.921 in the external testing set. This performance significantly surpasses that of models relying on single-region radiomics or deep transfer learning features alone.
Moreover, the fusion model demonstrated superior diagnostic accuracy compared to the evaluations conducted by two experienced radiologists, thereby highlighting its potential to support and enhance clinical decision-making in the differentiation of benign and malignant breast tumors. CONCLUSION The fusion model, combining multi-regional radiomics with deep transfer learning features, not only accurately differentiates between benign and malignant breast tumors but also outperforms the diagnostic assessments made by experienced radiologists. This underscores the model's potential as a valuable tool for improving the accuracy and reliability of breast tumor diagnosis.
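The LASSO-plus-SVM classifier can be sketched with scikit-learn. Here an L1-penalised logistic selector stands in for the LASSO step (a common choice for binary labels), and the synthetic feature matrix merely mimics a fused radiomics/deep-feature set; the feature counts and hyperparameters are illustrative, not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for a fused ITR + PTR feature matrix:
# 300 lesions, 200 radiomics/deep-feature columns, 20 of them informative
X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    # L1-penalised selector zeroes out uninformative columns (LASSO-style)
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf", random_state=0),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"5-fold AUC: {auc:.3f}")
```

The `roc_auc` scorer uses the SVM's decision function directly, so no probability calibration is needed for ranking-based evaluation.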
Affiliation(s)
- Tao Yu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China
- Renqiang Yu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China
- Mengqi Liu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China
- Xingyu Wang
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China
- Jichuan Zhang
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China
- Yineng Zheng
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; Medical Data Science Academy, Chongqing Medical University, Chongqing 400016, China
- Fajin Lv
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; Medical Data Science Academy, Chongqing Medical University, Chongqing 400016, China
3.
He Q, Yang Q, Su H, Wang Y. Multi-task learning for segmentation and classification of breast tumors from ultrasound images. Comput Biol Med 2024;173:108319. [PMID: 38513394; DOI: 10.1016/j.compbiomed.2024.108319]
Abstract
Segmentation and classification of breast tumors are critical components of breast ultrasound (BUS) computer-aided diagnosis (CAD), which significantly improves the diagnostic accuracy of breast cancer. However, the characteristics of tumor regions in BUS images, such as non-uniform intensity distributions, ambiguous or missing boundaries, and varying tumor shapes and sizes, pose significant challenges to automated segmentation and classification solutions. Many previous studies have proposed multi-task learning methods to jointly tackle tumor segmentation and classification by sharing the features extracted by the encoder. Unfortunately, this often introduces redundant or misleading information, which hinders effective feature exploitation and adversely affects performance. To address this issue, we present ACSNet, a novel multi-task learning network designed to optimize tumor segmentation and classification in BUS images. The segmentation network incorporates a novel gate unit to allow optimal transfer of valuable contextual information from the encoder to the decoder. In addition, we develop the Deformable Spatial Attention Module (DSAModule) to improve segmentation accuracy by overcoming the limitations of conventional convolution in dealing with morphological variations of tumors. In the classification branch, multi-scale feature extraction and channel attention mechanisms are integrated to discriminate between benign and malignant breast tumors. Experiments on two publicly available BUS datasets demonstrate that ACSNet not only outperforms mainstream multi-task learning methods for both breast tumor segmentation and classification tasks, but also achieves state-of-the-art results for BUS tumor segmentation. Code and models are available at https://github.com/qqhe-frank/BUS-segmentation-and-classification.git.
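The idea of a gate unit that filters encoder context before it reaches the decoder can be sketched in NumPy. This is a toy illustration of a generic sigmoid-gated skip connection, not ACSNet's actual gate unit (which is defined in the authors' repository); all shapes and weights here are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_skip(enc_feat, dec_feat, w_gate, b_gate):
    """Gate the encoder skip connection with a mask computed from the
    decoder state, so only useful context reaches the decoder.
    enc_feat: (Ce, H, W); dec_feat: (Cd, H, W); w_gate: (Ce, Cd).
    The einsum acts like a 1x1 convolution mapping decoder channels
    to one gate value per encoder channel and pixel."""
    gate = sigmoid(np.einsum('oc,chw->ohw', w_gate, dec_feat)
                   + b_gate[:, None, None])
    # suppress unhelpful encoder features, then concatenate as usual
    return np.concatenate([gate * enc_feat, dec_feat], axis=0)

rng = np.random.default_rng(0)
enc = rng.normal(size=(8, 16, 16))   # encoder features, 8 channels
dec = rng.normal(size=(4, 16, 16))   # upsampled decoder features
w = rng.normal(size=(8, 4)) * 0.1
b = np.zeros(8)
fused = gated_skip(enc, dec, w, b)
print(fused.shape)  # (12, 16, 16)
```

Because the gate is a sigmoid, every encoder activation is attenuated toward zero rather than amplified, which is what lets the decoder ignore redundant or misleading skip information.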
Affiliation(s)
- Qiqi He
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China; School of Life Science and Technology, Xidian University, Xi'an, China
- Qiuju Yang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Hang Su
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Yixuan Wang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
4.
Diao Z, Jiang H. A multi-instance tumor subtype classification method for small PET datasets using RA-DL attention module guided deep feature extraction with radiomics features. Comput Biol Med 2024;174:108461. [PMID: 38626509; DOI: 10.1016/j.compbiomed.2024.108461]
Abstract
BACKGROUND Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes like diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, the subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced. Therefore, the primary challenge lies in achieving precise subtype classification using a small dataset. METHOD This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, a support vector machine (SVM) is employed as the classifier for tumor subtypes instead of deep learning methods. Emphasizing the importance of texture information in tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage and compressed using an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, the proposed approach utilizes the RA-DL attention mechanism to guide the deep network in extracting complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy.
To address the challenges of limited and imbalanced data, the method avoids using classification labels during deep feature extraction and instead incorporates 2D region-of-interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into one feature vector using a multi-instance aggregation layer. RESULTS Validation experiments were conducted on three PET datasets: a liver cancer dataset, a lung cancer dataset, and a lymphoma dataset. For the three-class lung cancer task, the proposed method achieved area under the curve (AUC) values of 0.82, 0.84, and 0.83. For the binary lymphoma task, it achieved AUC values of 0.95 and 0.75, and for the binary liver tumor task, AUC values of 0.84 and 0.86. CONCLUSION The experimental results indicate that the proposed method significantly outperforms alternative approaches. Through the extraction of complementary radiomics and deep features, it achieves a substantial improvement in tumor subtype classification performance on small PET datasets.
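The multi-instance aggregation step can be sketched as attention-weighted pooling over a patient's per-lesion feature vectors. This is a generic MIL-attention sketch (tanh scoring followed by softmax weights); the paper's exact aggregation layer may differ, and all shapes and names are illustrative:

```python
import numpy as np

def mil_attention_pool(lesion_feats, w, v):
    """Aggregate the per-lesion feature vectors of one patient into a
    single patient-level vector via softmax attention weights.
    lesion_feats: (n_lesions, d); w: (h, d); v: (h,)."""
    scores = v @ np.tanh(w @ lesion_feats.T)       # one score per lesion
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                           # softmax -> weights sum to 1
    return alpha @ lesion_feats                    # convex combination, (d,)

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 6))    # three lesions, 6-D feature vector each
w = rng.normal(size=(4, 6))
v = rng.normal(size=4)
patient_vec = mil_attention_pool(feats, w, v)
print(patient_vec.shape)  # (6,)
```

Because the weights form a convex combination, the patient vector always lies inside the per-dimension range of the lesion features, and a patient whose lesions are identical pools to exactly that lesion vector.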
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
5.
Li L, Zhou X, Cui W, Li Y, Liu T, Yuan G, Peng Y, Zheng J. Combining radiomics and deep learning features of intra-tumoral and peri-tumoral regions for the classification of breast cancer lung metastasis and primary lung cancer with low-dose CT. J Cancer Res Clin Oncol 2023;149:15469-15478. [PMID: 37642722; DOI: 10.1007/s00432-023-05329-2]
Abstract
PURPOSE To investigate the performance of deep learning and radiomics features of the intra-tumoral region (ITR) and peri-tumoral region (PTR) in diagnosing breast cancer lung metastasis (BCLM) and primary lung cancer (PLC) with low-dose CT (LDCT). METHODS We retrospectively collected the LDCT images of 100 breast cancer patients with lung lesions, comprising 60 cases of BCLM and 40 cases of PLC. We proposed a fusion model that combined deep learning features extracted from a ResNet18-based multi-input residual convolution network with traditional radiomics features. Specifically, the fusion model adopted a multi-region strategy, incorporating these features from both the ITR and PTR. We then randomly divided the dataset into training and validation sets using a fivefold cross-validation approach. Comprehensive comparative experiments were performed between the proposed fusion model and eight other models: the intra-tumoral deep learning, radiomics, and deep-learning radiomics models; the peri-tumoral deep learning, radiomics, and deep-learning radiomics models; and the multi-region radiomics and deep-learning models. RESULTS The fusion model developed using deep-learning radiomics feature sets extracted from the ITR and PTR had the best classification performance, with an area under the curve of 0.913 (95% CI 0.840-0.960), significantly higher than that of any single-region radiomics or deep learning model. CONCLUSIONS The combination of radiomics and deep learning features was effective in discriminating BCLM from PLC. Additionally, analysis of the PTR can mine more comprehensive tumor information.
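A peri-tumoral region like the PTR used here is commonly derived by dilating the tumor mask and subtracting the tumor itself; a minimal sketch (the 3-pixel margin and square "tumor" are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(mask, margin=3):
    """Peri-tumoral region = dilated tumor mask minus the tumor itself.
    With the default cross-shaped structuring element, `iterations=margin`
    grows the mask by `margin` pixels in Manhattan distance."""
    dilated = binary_dilation(mask, iterations=margin)
    return dilated & ~mask

mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True                 # square stand-in for a tumor mask
ring = peritumoral_ring(mask, margin=3)
print(mask.sum(), ring.sum())
```

Radiomics features are then computed separately inside `mask` (ITR) and inside `ring` (PTR), which is what allows a multi-region model to see tumor-margin tissue that an intra-tumoral model ignores.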
Affiliation(s)
- Lei Li
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Xinglu Zhou
- Department of PET/CT Center, Harbin Medical University Cancer Hospital, Harbin, 150081, China
- Department of Radiology, Second Affiliated Hospital of Harbin Medical University, Harbin, 150086, China
- Wenju Cui
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yingci Li
- Department of PET/CT Center, Harbin Medical University Cancer Hospital, Harbin, 150081, China
- Tianyi Liu
- Department of Pathology, Second Affiliated Hospital of Harbin Medical University, Harbin, 150086, China
- Gang Yuan
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yunsong Peng
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou, 550002, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
6.
Wang Y, Xu Z, Tang L, Zhang Q, Chen M. The Clinical Application of Artificial Intelligence Assisted Contrast-Enhanced Ultrasound on BI-RADS Category 4 Breast Lesions. Acad Radiol 2023;30 Suppl 2:S104-S113. [PMID: 37095048; DOI: 10.1016/j.acra.2023.03.005]
Abstract
RATIONALE AND OBJECTIVES To propose a novel deep learning method incorporating multiple regions based on contrast-enhanced ultrasound and grayscale ultrasound, evaluate its performance in reducing false positives for Breast Imaging Reporting and Data System (BI-RADS) category 4 lesions, and compare its diagnostic performance with that of ultrasound experts. MATERIALS AND METHODS This study enrolled 163 breast lesions in 161 women from November 2018 to March 2021. Contrast-enhanced ultrasound and conventional ultrasound were performed before surgery or biopsy. A novel deep learning model incorporating multiple regions based on contrast-enhanced ultrasound and grayscale ultrasound was proposed to minimize the number of false-positive biopsies. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were compared between the deep learning model and ultrasound experts. RESULTS For BI-RADS category 4 lesions, the AUC, sensitivity, specificity, and accuracy of the deep learning model were 0.910, 91.5%, 90.5%, and 90.8%, respectively, compared with 0.869, 89.4%, 84.5%, and 85.9% for the ultrasound experts. CONCLUSION The proposed deep learning model had a diagnostic accuracy comparable to that of ultrasound experts, showing the potential to be clinically useful in minimizing the number of false-positive biopsies.
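The sensitivity, specificity, and accuracy figures above are standard confusion-matrix quantities; a minimal sketch (labels and values here are invented for illustration, not the study's data):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels
    (1 = malignant, 0 = benign)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"sensitivity": tp / (tp + fn),      # true-positive rate
            "specificity": tn / (tn + fp),      # true-negative rate
            "accuracy": (tp + tn) / len(y_true)}

m = diagnostic_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(m)  # all three equal 2/3 for this toy example
```

Reducing false-positive biopsies means raising specificity while holding sensitivity, which is exactly the trade-off these three numbers (plus AUC) summarize.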
Affiliation(s)
- Yuqun Wang
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China
- Zhou Xu
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Lei Tang
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China
- Qi Zhang
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Man Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China
7.
Song M, Kim Y. Unsupervised learning method via triple reconstruction for the classification of ultrasound breast lesions. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103782]
8.
Qu X, Lu H, Tang W, Wang S, Zheng D, Hou Y, Jiang J. A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images. Med Phys 2022;49:5787-5798. [PMID: 35866492; DOI: 10.1002/mp.15852]
Abstract
PURPOSE Breast cancer is the most commonly occurring cancer worldwide. Ultrasound reflectivity imaging can be used to obtain breast ultrasound (BUS) images for classifying benign and malignant tumors, but such classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification can assist doctors and improve objectivity; however, convolutional neural networks (CNNs) are not good at learning global features, while vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a VGG attention vision transformer (VGGA-ViT) network to overcome these disadvantages. METHODS In the proposed method, a CNN module extracts local features, while a ViT module learns the global relationships between different regions and enhances the relevant local features. The CNN module, named the VGG attention (VGGA) module, is composed of a visual geometry group (VGG) backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation (SE) block. Both the VGG backbone and the ViT module were pre-trained on the ImageNet dataset and re-trained using BUS samples in this study. Two BUS datasets were employed for validation. RESULTS Cross-validation was conducted on two BUS datasets. CONCLUSIONS In this study, we proposed the VGGA-ViT for BUS classification, which is good at learning both local and global features. The proposed network achieved higher accuracy than the previous methods compared.
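A squeeze-and-excitation block like the one in the VGGA module can be sketched in plain NumPy: global-average-pool each channel, pass the channel statistics through a small two-layer bottleneck, and rescale the channels with sigmoid weights. The weight shapes and reduction ratio here are illustrative, not taken from the paper:

```python
import numpy as np

def se_block(feat, w1, w2):
    """Squeeze-and-excitation channel recalibration.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction ratio r."""
    squeeze = feat.mean(axis=(1, 2))                # global average pool, (C,)
    excite = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ excite)))    # sigmoid weights in (0, 1)
    return feat * scale[:, None, None]              # reweight each channel

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))       # 8-channel feature map
w1 = rng.normal(size=(2, 8)) * 0.1   # reduction ratio r = 4
w2 = rng.normal(size=(8, 2)) * 0.1
out = se_block(x, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Since every channel weight lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps relative to the rest.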
Affiliation(s)
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Hongyan Lu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Wenzhong Tang
- School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Shuai Wang
- Research Institute for Frontier Science, Beihang University, Beijing, 100191, China
- Dezhi Zheng
- Research Institute for Frontier Science, Beihang University, Beijing, 100191, China
- Yaxin Hou
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
9.
Liu Q, Wang J, Zuo M, Cao W, Zheng J, Zhao H, Xie J. NCRNet: Neighborhood Context Refinement Network for skin lesion segmentation. Comput Biol Med 2022;146:105545. [PMID: 35477048; DOI: 10.1016/j.compbiomed.2022.105545]
Abstract
Accurate skin lesion segmentation plays a fundamental role in computer-aided melanoma analysis. Recently, several FCN-based methods have been proposed and have achieved promising results in lesion segmentation tasks. However, due to the variable shapes, different scales, noise interference, and ambiguous boundaries of skin lesions, the lesion-location and boundary-delineation capabilities of these works are still insufficient. To overcome these challenges, in this paper we propose a novel Neighborhood Context Refinement Network (NCRNet) that uses a coarse-to-fine strategy to achieve accurate skin lesion segmentation. The proposed NCRNet contains a shared encoder and two different but closely related decoders for locating skin lesions and refining lesion boundaries. Specifically, we first design the Parallel Attention Decoder (PAD), which effectively extracts and fuses local detail information and global semantic information at multiple levels to locate skin lesions of different sizes and shapes. Then, based on the initial lesion location, we design the Neighborhood Context Refinement Decoder (NCRD), which leverages fine-grained multi-stage neighborhood context cues to continuously refine the lesion boundaries. Furthermore, the neighborhood-based deep supervision used in the NCRD makes the shared encoder pay more attention to the lesion boundary areas and promotes convergence of the segmentation network. The public skin lesion segmentation dataset ISIC2017 is adopted to validate the effectiveness of the proposed NCRNet. Comprehensive experiments show that NCRNet achieves state-of-the-art performance compared with nine other competitive methods, reaching 78.62%, 86.55%, and 94.01% on Jaccard, Dice, and Accuracy, respectively.
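The Jaccard and Dice scores used for evaluation are standard overlap measures between a predicted mask and the ground truth; a minimal sketch on two toy masks:

```python
import numpy as np

def dice_jaccard(pred, target):
    """Dice and Jaccard overlap scores for binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    return dice, inter / union

p = np.zeros((8, 8), bool); p[2:6, 2:6] = True   # predicted 4x4 square
t = np.zeros((8, 8), bool); t[3:7, 3:7] = True   # ground truth, shifted by 1
d, j = dice_jaccard(p, t)
print(f"Dice={d:.4f}, Jaccard={j:.4f}")  # Dice=0.5625, Jaccard=0.3913
```

Dice is always at least as large as Jaccard for the same masks, which is why the paper's Dice figure (86.55%) exceeds its Jaccard figure (78.62%).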
Affiliation(s)
- Qi Liu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jingkun Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Mengying Zuo
- Department of Cardiology, Children's Hospital of Soochow University, Suzhou, 215003, China
- Weiwei Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; Jinan Guoke Medical Technology Development Co., Ltd, Jinan, 250101, China
- Hui Zhao
- The Wenzhou Third Clinical Institute Affiliated to Wenzhou Medical University (Wenzhou People's Hospital), Wenzhou, 325000, China
- Jing Xie
- The Wenzhou Third Clinical Institute Affiliated to Wenzhou Medical University (Wenzhou People's Hospital), Wenzhou, 325000, China