1
Jiang Q, Yu Y, Ren Y, Li S, He X. A review of deep learning methods for gastrointestinal diseases classification applied in computer-aided diagnosis system. Med Biol Eng Comput 2025; 63:293-320. [PMID: 39343842] [DOI: 10.1007/s11517-024-03203-y]
Abstract
Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases, aligning with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored to classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. First, the esophagus, stomach, small intestine, and large intestine are localized to determine which organ contains the lesion. Second, lesion localization and single-disease classification are performed on the premise that the organ corresponding to the image is known. Finally, comprehensive classification of multiple diseases is carried out. The results of single- and multi-disease classification are compared to achieve more accurate classification outcomes, supporting the construction of a more effective computer-aided diagnosis system for gastrointestinal diseases.
Affiliation(s)
- Qianru Jiang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yulin Yu
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yipei Ren
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
2
Oukdach Y, Garbaz A, Kerkaou Z, El Ansari M, Koutti L, El Ouafdi AF, Salihoun M. UViT-Seg: An Efficient ViT and U-Net-Based Framework for Accurate Colorectal Polyp Segmentation in Colonoscopy and WCE Images. Journal of Imaging Informatics in Medicine 2024; 37:2354-2374. [PMID: 38671336] [PMCID: PMC11522253] [DOI: 10.1007/s10278-024-01124-8]
Abstract
Colorectal cancer (CRC) stands out as one of the most prevalent cancers worldwide. Accurate localization of colorectal polyps in endoscopy images is pivotal for timely detection and removal, contributing significantly to CRC prevention. Manual analysis of images generated by gastrointestinal screening technologies is a tedious task for doctors, so computer vision-assisted cancer detection could serve as an efficient tool for polyp segmentation. Numerous efforts have been dedicated to automating polyp localization, with the majority of studies relying on convolutional neural networks (CNNs) to learn features from polyp images. Despite their success in polyp segmentation tasks, CNNs exhibit significant limitations in precisely determining polyp location and shape because they learn only local image features. Since gastrointestinal images vary greatly in both high- and low-level features, a framework able to learn both is desirable. This paper introduces UViT-Seg, a framework designed for polyp segmentation in gastrointestinal images. Operating on an encoder-decoder architecture, UViT-Seg employs two distinct feature extraction methods: a vision transformer in the encoder captures long-range semantic information, while a CNN module integrating squeeze-excitation and dual attention mechanisms captures low-level features, focusing on critical image regions. Experimental evaluations on five public datasets (CVC-ClinicDB, ColonDB, Kvasir-SEG, ETIS-LaribDB, and Kvasir Capsule-SEG) demonstrate UViT-Seg's effectiveness in polyp localization. To confirm its generalization performance, the model is tested on datasets not used in training. Benchmarked against common segmentation methods and state-of-the-art polyp segmentation approaches, the proposed model yields promising results; for instance, it achieves a mean Dice coefficient of 0.915 and a mean intersection over union of 0.902 on the CVC Colon dataset. Furthermore, UViT-Seg is efficient, requiring fewer computational resources for both training and testing, which positions it as a strong choice for real-world deployment.
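The Dice coefficient and intersection over union (IoU) quoted in this abstract follow the standard set-overlap definitions. The sketch below is a generic illustration on flattened binary masks, not code from the UViT-Seg paper:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for flat 0/1 masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for flat 0/1 masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return (inter + eps) / (union + eps)
```

Note that Dice is always at least as large as IoU on the same prediction, which is consistent with the paper reporting a higher mean Dice (0.915) than mean IoU (0.902).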
Affiliation(s)
- Yassine Oukdach
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Anass Garbaz
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Zakaria Kerkaou
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mohamed El Ansari
- Informatics and Applications Laboratory, Department of Computer Sciences, Faculty of Science, Moulay Ismail University, B.P 11201, Meknès, 52000, Morocco
- Lahcen Koutti
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Ahmed Fouad El Ouafdi
- LabSIV, Department of Computer Science, Faculty of Sciences, Ibnou Zohr University, Agadir, 80000, Morocco
- Mouna Salihoun
- Faculty of Medicine and Pharmacy of Rabat, Mohammed V University of Rabat, Rabat, 10000, Morocco
3
Sriraman H, Badarudeen S, Vats S, Balasubramanian P. A Systematic Review of Real-Time Deep Learning Methods for Image-Based Cancer Diagnostics. J Multidiscip Healthc 2024; 17:4411-4425. [PMID: 39281299] [PMCID: PMC11397255] [DOI: 10.2147/jmdh.s446745]
Abstract
Deep learning (DL) drives researchers to create models for cancer diagnosis using medical image processing because of its innate ability to recognize difficult-to-detect patterns in complex, noisy, and massive data. This work explores in depth the use of deep learning algorithms for real-time cancer diagnosis. Real-time medical diagnosis determines the illness or condition that accounts for a patient's symptoms and outward physical manifestations within a predetermined time frame. With a waiting period of anywhere between 5 and 30 days, several current approaches, including screening tests, biopsies, and other prospective methods, can assist in discovering a problem, particularly cancer. This article conducts a thorough literature review to understand how DL affects the length of this waiting period. In addition, the accuracy and turnaround time of different imaging modalities are evaluated for DL-based cancer diagnosis. Convolutional neural networks are critical for real-time cancer diagnosis, with models achieving up to 99.3% accuracy. The effectiveness and cost of the infrastructure required for real-time image-based medical diagnostics are also evaluated. According to the review, generalization problems, data variability, and the need for explainable DL are some of the most significant barriers to using DL in clinical trials; explainable DL will be key to making DL applicable for cancer diagnosis.
Affiliation(s)
- Harini Sriraman
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Saleena Badarudeen
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Saransh Vats
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Prakash Balasubramanian
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
4
Wan L, Chen Z, Xiao Y, Zhao J, Feng W, Fu H. Iterative feedback-based models for image and video polyp segmentation. Comput Biol Med 2024; 177:108569. [PMID: 38781640] [DOI: 10.1016/j.compbiomed.2024.108569]
Abstract
Accurate segmentation of polyps in colonoscopy images has gained significant attention in recent years, given its crucial role in automated colorectal cancer diagnosis. Many existing deep learning-based methods follow a one-stage processing pipeline, often involving feature fusion across different levels or boundary-related attention mechanisms. Drawing on the success of applying Iterative Feedback Units (IFU) to image polyp segmentation, this paper proposes FlowICBNet, which extends the IFU to the domain of video polyp segmentation. By harnessing the unique ability of IFU to propagate and refine past segmentation results, our method proves effective in mitigating challenges linked to the inherent limitations of endoscopic imaging, notably frequent camera shake and frame defocusing. Furthermore, FlowICBNet introduces two pivotal modules, Reference Frame Selection (RFS) and Flow Guided Warping (FGW), which filter and select the most suitable historical reference frames for the task at hand. Experimental results on a large video polyp segmentation dataset demonstrate that our method significantly outperforms state-of-the-art methods by notable margins, achieving an average metric improvement of 7.5% on SUN-SEG-Easy and 7.4% on SUN-SEG-Hard. Our code is available at https://github.com/eraserNut/ICBNet.
Affiliation(s)
- Liang Wan
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Zhihao Chen
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Yefan Xiao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Junting Zhao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Wei Feng
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Republic of Singapore
5
Xu C, Fan K, Mo W, Cao X, Jiao K. Dual ensemble system for polyp segmentation with submodels adaptive selection ensemble. Sci Rep 2024; 14:6152. [PMID: 38485963] [PMCID: PMC10940608] [DOI: 10.1038/s41598-024-56264-2]
Abstract
Colonoscopy is one of the main methods to detect colon polyps, and it is widely used to prevent and diagnose colon cancer. With the rapid development of computer vision, deep learning-based semantic segmentation methods for colon polyps have been widely researched. However, the accuracy and stability of some methods in colon polyp segmentation tasks leave room for improvement. In addition, how to select appropriate sub-models in ensemble learning for the colon polyp segmentation task still needs to be explored. To solve these problems, we first exploit multiple complementary high-level semantic features through the Multi-Head Control Ensemble. Then, to solve the sub-model selection problem during training, we propose the SDBH-PSO Ensemble for sub-model selection and optimization of ensemble weights on different datasets. Experiments were conducted on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, ETIS-LaribPolypDB and PolypGen. The results show that the DET-Former, constructed from the Multi-Head Control Ensemble and the SDBH-PSO Ensemble, consistently improves accuracy across different datasets. The Multi-Head Control Ensemble demonstrated superior feature fusion capability in the experiments, and the SDBH-PSO Ensemble demonstrated excellent sub-model selection capability, which will retain significant reference value and practical utility as deep learning networks evolve.
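The abstract does not spell out the fusion rule, but the common baseline that PSO-style ensemble-weight optimization builds on is a normalized weighted average of submodel probability maps. The sketch below is that generic baseline only, with all names hypothetical, not the authors' Multi-Head Control or SDBH-PSO implementation:

```python
def weighted_ensemble(prob_maps, weights):
    """Fuse submodel probability maps with a normalized weighted average.

    prob_maps: list of equal-sized 2-D lists with values in [0, 1],
               one map per submodel.
    weights:   one non-negative weight per submodel (e.g. the quantities a
               PSO-style optimizer would tune); normalized to sum to 1 here.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    return [[sum(w * m[r][c] for w, m in zip(norm, prob_maps))
             for c in range(cols)] for r in range(rows)]
```

Thresholding the fused map (e.g. at 0.5) would then yield the final binary segmentation mask.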
Affiliation(s)
- Cun Xu
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kefeng Fan
- China Electronics Standardization Institute, Beijing, 100007, China
- Wei Mo
- Guilin University of Electronic Technology, Guilin, 541000, China
- Xuguang Cao
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kaijie Jiao
- Guilin University of Electronic Technology, Guilin, 541000, China
6
Oukdach Y, Kerkaou Z, El Ansari M, Koutti L, Fouad El Ouafdi A, De Lange T. ViTCA-Net: a framework for disease detection in video capsule endoscopy images using a vision transformer and convolutional neural network with a specific attention mechanism. Multimedia Tools and Applications 2024; 83:63635-63654. [DOI: 10.1007/s11042-023-18039-1]
7
Huang Z, Xie F, Qing W, Wang M, Liu M, Sun D. MGF-net: Multi-channel group fusion enhancing boundary attention for polyp segmentation. Med Phys 2024; 51:407-418. [PMID: 37403578] [DOI: 10.1002/mp.16584]
Abstract
BACKGROUND: Colonic polyps are the most prevalent neoplastic lesions detected during colorectal cancer screening, and timely detection and excision of these precursor lesions is crucial for preventing malignancies and reducing mortality rates.
PURPOSE: The pressing need for intelligent polyp detection has led to the development of a high-precision intelligent polyp segmentation network designed to improve polyp screening rates during colonoscopies.
METHODS: In this study, we employed ResNet50 as the backbone network and embedded a multi-channel grouping fusion encoding module in the third to fifth stages to extract high-level semantic features of polyps. Receptive field modules capture multi-scale features, and grouping fusion modules capture salient features in different group channels, guiding the decoder to generate an initial global mapping with improved accuracy. To refine the segmentation of the initial global mapping, we introduced an enhanced boundary weight attention module that adaptively thresholds the initial global mapping using learnable parameters. A self-attention mechanism then models the long-distance dependencies of the polyp boundary area, producing an output feature map with enhanced boundaries that effectively refines the boundary of the target area.
RESULTS: We carried out comparison experiments between MGF-Net and mainstream polyp segmentation networks on five public datasets: ColonDB, CVC-ColonDB, CVC-612, Kvasir, and ETIS. The results demonstrate that the segmentation accuracy of MGF-Net is significantly improved on these datasets, and a hypothesis test confirmed the statistical significance of the computed results.
CONCLUSIONS: Our proposed MGF-Net outperforms existing mainstream baseline networks and presents a promising solution to the pressing need for intelligent polyp detection. The proposed model is available at https://github.com/xiefanghhh/MGF-NET.
Affiliation(s)
- Zhiyong Huang
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Fang Xie
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Wencheng Qing
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Mengyao Wang
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Man Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Daming Sun
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
8
Oh S, Oh D, Kim D, Song W, Hwang Y, Cho N, Lim YJ. Video Analysis of Small Bowel Capsule Endoscopy Using a Transformer Network. Diagnostics (Basel) 2023; 13:3133. [PMID: 37835876] [PMCID: PMC10572266] [DOI: 10.3390/diagnostics13193133]
Abstract
Although wireless capsule endoscopy (WCE) detects small bowel diseases effectively, it has some limitations. For example, the reading process can be time-consuming due to the numerous images generated per case, and lesion detection accuracy may rely on the operators' skills and experience. Hence, many researchers have recently developed deep-learning-based methods to address these limitations. However, such methods tend to select only a portion of the images from a given WCE video and analyze each image individually. In this study, we note that more information can be extracted from the unused frames and from the temporal relations of sequential frames. Specifically, to increase the accuracy of lesion detection without depending on experts' frame selection skills, we suggest using whole video frames as the input to the deep learning system. We therefore propose a new Transformer-architecture-based neural encoder that takes the entire video as input, exploiting the power of the Transformer architecture to extract long-term global correlations within and between the input frames. This captures the temporal context of the input frames as well as the attentional features within each frame. Tests on benchmark datasets of four WCE videos showed 95.1% sensitivity and 83.4% specificity. These results may significantly advance automated lesion detection techniques for WCE images.
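The sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions. The following is a minimal generic sketch for per-frame binary predictions, not the authors' evaluation code:

```python
def sensitivity_specificity(preds, labels):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    for binary per-frame predictions (1 = lesion present, 0 = normal)."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

High sensitivity with lower specificity, as reported here (95.1% vs. 83.4%), means few missed lesions at the cost of some false alarms, a trade-off generally preferred in screening settings.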
Affiliation(s)
- SangYup Oh
- School of Electrical and Computer Engineering, Seoul National University, 1 Gwanak-ro, Kwanak-gu, Seoul 08826, Republic of Korea
- DongJun Oh
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Dongmin Kim
- JLK TOWER, Gangnam-gu, Seoul 06141, Republic of Korea
- Woohyuk Song
- School of Electrical and Computer Engineering, Seoul National University, 1 Gwanak-ro, Kwanak-gu, Seoul 08826, Republic of Korea
- Youngbae Hwang
- Department of Electronics Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Namik Cho
- School of Electrical and Computer Engineering, Seoul National University, 1 Gwanak-ro, Kwanak-gu, Seoul 08826, Republic of Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
9
Shahid B, Abbas M, Ur Rehman A, Ul Abideen Z. IAPC2: Improved and Automatic Classification of Polyp for Colorectal Cancer. 2023 International Conference on Business Analytics for Technology and Security (ICBATS) 2023. [DOI: 10.1109/icbats57792.2023.10111431]
Affiliation(s)
- Bisma Shahid
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Maria Abbas
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Abd Ur Rehman
- Riphah International University, Department of Computer Science, Lahore, Pakistan
10
Leveraging Marine Predators Algorithm with Deep Learning for Lung and Colon Cancer Diagnosis. Cancers (Basel) 2023; 15:1591. [PMID: 36900381] [PMCID: PMC10001330] [DOI: 10.3390/cancers15051591]
Abstract
Cancer is a deadly disease caused by various biochemical abnormalities and genetic disorders. Colon and lung cancer have developed into two major causes of disability and death in human beings. Histopathological detection of these malignancies is a vital element in determining the optimal treatment, and timely initial diagnosis on either front diminishes the possibility of death. Deep learning (DL) and machine learning (ML) methods are used to hasten such cancer recognition, allowing the research community to examine more patients in a much shorter period and at lower cost. This study introduces a marine predators algorithm with deep learning for lung and colon cancer classification (MPADL-LC3). The presented MPADL-LC3 technique aims to properly discriminate different types of lung and colon cancer on histopathological images. To accomplish this, the MPADL-LC3 technique employs CLAHE-based contrast enhancement as a pre-processing step, applies MobileNet for feature vector generation, and uses the marine predators algorithm (MPA) as a hyperparameter optimizer. Furthermore, deep belief networks (DBN) are applied for lung and colon cancer classification. The simulation values of the MPADL-LC3 technique were examined on benchmark datasets. The comparison study highlighted the enhanced outcomes of the MPADL-LC3 system in terms of different measures.
11
Sun H, Liu J, Wang Q. Magnetic Actuation Systems and Magnetic Robots for Gastrointestinal Examination and Treatment. Chinese Journal of Electrical Engineering 2023; 9:3-28. [DOI: 10.23919/cjee.2023.000009]
Affiliation(s)
- Hongbo Sun
- Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing 100190, China
- Jianhua Liu
- Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing 100190, China
- Qiuliang Wang
- Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing 100190, China
12
Ragab M, Katib I, Sharaf SA, Assiri FY, Hamed D, Al-Ghamdi AAM. Self-Upgraded Cat Mouse Optimizer With Machine Learning Driven Lung Cancer Classification on Computed Tomography Imaging. IEEE Access 2023; 11:107972-107981. [DOI: 10.1109/access.2023.3313508]
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Sanaa Abdullah Sharaf
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Fatmah Yousef Assiri
- Software Engineering Department, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Diaa Hamed
- Faculty of Earth Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Abdullah Al-Malaise Al-Ghamdi
- Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
13
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. Sensors (Basel) 2022; 22:9250. [PMID: 36501951] [PMCID: PMC9739266] [DOI: 10.3390/s22239250]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various tests for screening that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades as well as the available imaging modalities (particularly, histopathology images used for analysis) in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer that lead to early treatment of the disease and produce a lower mortality rate compared with the rate produced after symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests to make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
14
Hu J, Xu Y, Tang Z. DAN-PD: Domain adaptive network with parallel decoder for polyp segmentation. Comput Med Imaging Graph 2022; 101:102124. [PMID: 36182740] [DOI: 10.1016/j.compmedimag.2022.102124]
Abstract
Endoscopy is essential for polyp diagnosis and the prevention of colorectal cancer. Many deep learning methods have been proposed to perform automatic semantic segmentation of polyps in endoscopic images. However, labeled training images are always scarce, and the styles of endoscopic images from different medical centers vary greatly. Annotating medical images requires much effort, so making more efficient use of the existing labeled data is becoming an increasingly critical issue. Considering the characteristics of polyp segmentation tasks and the need for generalization, we propose a novel method named DAN-PD based on the Vision Transformer, for which we devised a Teacher Parallel Encoder (TPE) and a Domain-Aware Parallel Decoder (DAPD). Our design innovatively introduces unsupervised domain adaptation (UDA) methods and adversarial learning strategies to the polyp segmentation task. We conducted four transfer learning experiments with three public polyp image datasets to examine the model's performance. The results show that our proposed method is ahead of the other methods in all experiments and reaches the state-of-the-art level.
Affiliation(s)
- Jiaqi Hu
- University of Shanghai for Science and Technology, No. 516, Jungong Rd., Shanghai, 200093, Shanghai, China
- Yongqin Xu
- University of Shanghai for Science and Technology, No. 516, Jungong Rd., Shanghai, 200093, Shanghai, China
- Zhixian Tang
- College of Medical Imaging, Shanghai University of Medicine and Health Sciences, No. 279, Zhouzhu Rd., Shanghai, 201318, Shanghai, China
15
Yin TK, Huang KL, Chiu SR, Yang YQ, Chang BR. Endoscopy Artefact Detection by Deep Transfer Learning of Baseline Models. J Digit Imaging 2022; 35:1101-1110. [PMID: 35478060] [PMCID: PMC9582060] [DOI: 10.1007/s10278-022-00627-6]
Abstract
In endoscopy, a long, thin tube with a light source and a camera at its tip is inserted into the body to visualise tumours inside organs on a screen. However, multiple artefacts in the resulting video frames cause difficulty during the diagnosis of cancers. In this research, deep learning was applied to detect eight kinds of artefacts: specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Based on transfer learning with pre-trained parameters and fine-tuning, two state-of-the-art methods were applied for detection: faster region-based convolutional neural networks (Faster R-CNN) and EfficientDet. Experiments were implemented on the grand challenge dataset Endoscopy Artefact Detection and Segmentation (EAD2020). To validate our approach, we used phase I (2,200 frames) and phase II (331 frames) of the original training dataset with ground-truth annotations as the training and testing datasets, respectively. Among the tested methods, EfficientDet-D2 achieves a score of 0.2008 (0.6 × mAPd + 0.4 × mIoUd) on the dataset, better than three other baselines (Faster R-CNN, YOLOv3, and RetinaNet) and competitive with the best non-baseline result of 0.25123 on the leaderboard, although our testing was on the 331 phase II frames instead of the original 200 testing frames. Without extra improvement techniques beyond basic neural networks, such as test-time augmentation, we showed that a simple baseline could achieve state-of-the-art performance in detecting artefacts in endoscopy. In conclusion, we proposed combining EfficientDet-D2 with suitable data augmentation and pre-trained parameters during fine-tuning to detect artefacts in endoscopy.
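The challenge score reported in this abstract is a fixed weighted combination of detection mAP and segmentation mIoU. As a worked illustration of the weighting (the component values in the usage check are made up, since the abstract reports only the combined score):

```python
def ead_score(map_d, miou_d):
    """EAD2020-style combined score: 0.6 * mAPd + 0.4 * mIoUd."""
    return 0.6 * map_d + 0.4 * miou_d
```

For example, `ead_score(0.5, 0.25)` gives 0.4; detection quality carries the larger share of the score.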
Affiliation(s)
- Tang-Kai Yin, Kai-Lun Huang, Si-Rong Chiu, Yu-Qi Yang, Bao-Rong Chang: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
16
Yue G, Han W, Li S, Zhou T, Lv J, Wang T. Automated polyp segmentation in colonoscopy images via deep network with lesion-aware feature selection and refinement. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103846]
17
Han J, Xu C, An Z, Qian K, Tan W, Wang D, Fang Q. PRAPNet: A Parallel Residual Atrous Pyramid Network for Polyp Segmentation. Sensors (Basel) 2022; 22:4658. [PMID: 35808154] [PMCID: PMC9268928] [DOI: 10.3390/s22134658]
Abstract
In a colonoscopy, accurate computer-aided polyp detection and segmentation can help endoscopists remove abnormal tissue, reducing the chance of polyps developing into cancer, which is of great importance. In this paper, we propose a neural network, the parallel residual atrous pyramid network (PRAPNet), built on a parallel residual atrous pyramid module for intestinal polyp segmentation. The proposed module makes full use of the global contextual information of the different regions. The experimental results showed that our proposed global prior module achieves better segmentation results in the intestinal polyp segmentation task than previously published methods. The mean intersection over union and dice coefficient of the model on the Kvasir-SEG dataset were 90.4% and 94.2%, respectively, outperforming seven classical segmentation network models (U-Net, U-Net++, ResUNet++, PraNet, CaraNet, SFFormer-L, TransFuse-L).
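The two metrics reported above, mean intersection over union and the dice coefficient, are standard overlap measures computed from binary masks. A generic sketch using NumPy boolean arrays, not the authors' evaluation code:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return float(2 * inter / total) if total else 1.0
```

Dataset-level scores average these per image, so a paper's mIoU and mean dice need not satisfy the per-mask identity Dice = 2·IoU/(1 + IoU) exactly.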
Affiliation(s)
- Jubao Han, Chao Xu, Ziheng An, Kai Qian, Wei Tan, Dou Wang, Qianqian Fang: School of Integrated Circuits, Anhui University, Hefei 230601, China; Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
18
Yue G, Han W, Jiang B, Zhou T, Cong R, Wang T. Boundary Constraint Network with Cross Layer Feature Integration for Polyp Segmentation. IEEE J Biomed Health Inform 2022; 26:4090-4099. [PMID: 35536816] [DOI: 10.1109/jbhi.2022.3173948]
Abstract
Clinically, proper polyp localization in endoscopy images plays a vital role in follow-up treatment (e.g., surgical planning). Deep convolutional neural networks (CNNs) offer a promising avenue for automatic polyp segmentation and avoid the limitations of visual inspection, e.g., subjectivity and overwork. However, most existing CNN-based methods provide unsatisfactory segmentation performance. In this paper, we propose a novel boundary constraint network, namely BCNet, for accurate polyp segmentation. The success of BCNet comes from integrating cross-level context information and leveraging edge information. Specifically, to avoid the drawbacks of simple feature addition or concatenation, BCNet applies a cross-layer feature integration strategy (CFIS) to fuse the features of the three highest layers, yielding better performance. CFIS consists of three attention-driven cross-layer feature interaction modules (ACFIMs) and two global feature integration modules (GFIMs). ACFIM adaptively fuses the context information of the three highest layers via the self-attention mechanism instead of direct addition or concatenation. GFIM integrates the fused information across layers under the guidance of global attention. To obtain accurate boundaries, BCNet introduces a bilateral boundary extraction module that collaboratively explores the polyp and non-polyp information of the shallow layer based on high-level location information and boundary supervision. Through joint supervision of the polyp area and boundary, BCNet is able to produce more accurate polyp masks. Experimental results on three public datasets show that the proposed BCNet outperforms seven state-of-the-art competing methods in terms of both effectiveness and generalization.
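As a rough illustration of why attention-weighted fusion differs from plain addition or concatenation, the toy sketch below weights same-shape feature maps by a softmax over their global average activations. This is only a sketch of the general idea, not BCNet's ACFIM, whose self-attention design is more involved:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(feats):
    """Fuse same-shape feature maps with attention weights derived
    from each map's global average activation, instead of plain
    addition or concatenation. Toy sketch only, not BCNet's ACFIM."""
    gap = np.array([f.mean() for f in feats])  # one global descriptor per map
    w = softmax(gap)                           # attention weights sum to 1
    return sum(wi * f for wi, f in zip(w, feats))
```

Because the weights sum to 1, fusing identical maps returns the map unchanged, while dissimilar maps are re-weighted by their overall activation.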
19
Guo X, Chen Z, Liu J, Yuan Y. Non-equivalent images and pixels: confidence-aware resampling with meta-learning mixup for polyp segmentation. Med Image Anal 2022; 78:102394. [DOI: 10.1016/j.media.2022.102394]
20
Guo X, Liu J, Yuan Y. Semantic-Oriented Labeled-to-Unlabeled Distribution Translation for Image Segmentation. IEEE Trans Med Imaging 2022; 41:434-445. [PMID: 34543194] [DOI: 10.1109/tmi.2021.3114329]
Abstract
Automatic medical image segmentation plays a crucial role in many medical applications, such as disease diagnosis and treatment planning. Existing deep learning based models usually regard the segmentation task as pixel-wise classification and neglect the semantic correlations of pixels across different images, leading to vague feature distributions. Moreover, pixel-wise annotated data is rare in the medical domain, and the scarce annotated data usually exhibits a distribution biased against the desired one, hindering performance improvement under the supervised learning setting. In this paper, we propose a novel Labeled-to-unlabeled Distribution Translation (L2uDT) framework with Semantic-oriented Contrastive Learning (SoCL), mainly for addressing the aforementioned issues in medical image segmentation. In SoCL, a semantic grouping module is designed to cluster pixels into a set of semantically coherent groups, and a semantic-oriented contrastive loss is advanced to constrain group-wise prototypes, so as to explicitly learn a feature space with intra-class compactness and inter-class separability. We then establish an L2uDT strategy to approximate the desired data distribution for unbiased optimization, where we translate the labeled data distribution with the guidance of extensive unlabeled data. In particular, a bias estimator is devised to measure the distribution bias, and a gradual-paced shift is derived to progressively translate the labeled data distribution to the unlabeled one. Both labeled and translated data are leveraged to optimize the segmentation model simultaneously. We illustrate the effectiveness of the proposed method on two benchmark datasets, EndoScene and PROSTATEx, where it achieves state-of-the-art performance, clearly demonstrating its effectiveness for medical image segmentation. The source code is available at https://github.com/CityU-AIM-Group/L2uDT.
21
Mi J, Han X, Wang R, Ma R, Zhao D. Diagnostic Accuracy of Wireless Capsule Endoscopy in Polyp Recognition Using Deep Learning: A Meta-Analysis. Int J Clin Pract 2022; 2022:9338139. [PMID: 35685533] [PMCID: PMC9159236] [DOI: 10.1155/2022/9338139]
Abstract
AIM: Because completed studies have small sample sizes and use different algorithms, a meta-analysis was conducted to assess the accuracy of wireless capsule endoscopy (WCE) in identifying polyps using deep learning. METHOD: Two independent reviewers searched PubMed, Embase, Web of Science, and the Cochrane Library for potentially eligible studies published up to December 8, 2021, which were analysed on a per-image basis. Stata, RevMan, and Meta-DiSc were used to conduct the meta-analysis. A random effects model was used, and subgroup and regression analyses were performed to explore sources of heterogeneity. RESULTS: Eight studies published between 2017 and 2021, covering 819 patients and 18,414 frames, were included in the meta-analysis. The summary estimates for WCE in identifying polyps by deep learning were: sensitivity 0.97 (95% confidence interval (CI), 0.95-0.98); specificity 0.97 (95% CI, 0.94-0.98); positive likelihood ratio 27.19 (95% CI, 15.32-50.42); negative likelihood ratio 0.03 (95% CI, 0.02-0.05); diagnostic odds ratio 873.69 (95% CI, 387.34-1970.74); and area under the sROC curve 0.99. CONCLUSION: WCE with deep learning identifies polyps with high accuracy, but multicentre prospective randomized controlled studies are needed.
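The likelihood ratios and diagnostic odds ratio above follow from sensitivity and specificity by standard identities; a quick sketch (note that meta-analytic pooled estimates are fitted jointly across studies, so plugging the pooled sensitivity and specificity into these formulas need not reproduce the reported pooled LRs and DOR exactly):

```python
def likelihood_ratios(sens: float, spec: float):
    """Standard diagnostic-accuracy identities:
    LR+ = sens / (1 - spec), LR- = (1 - sens) / spec, DOR = LR+ / LR-."""
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

# Plugging in the pooled point estimates from the abstract (0.97, 0.97);
# the jointly fitted pooled values in the paper differ somewhat.
lr_pos, lr_neg, dor = likelihood_ratios(0.97, 0.97)
```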
Affiliation(s)
- Junjie Mi, Rong Wang, Ruijun Ma, Danyu Zhao: Digestive Endoscopy Center, Shanxi Provincial People's Hospital, Taiyuan, China
- Xiaofang Han: Reproductive Medicine, Shanxi Provincial People's Hospital, Taiyuan, China
22
Guo X, Yang C, Yuan Y. Dynamic-weighting hierarchical segmentation network for medical images. Med Image Anal 2021; 73:102196. [PMID: 34365142] [DOI: 10.1016/j.media.2021.102196]
Abstract
Automatic medical image segmentation plays a crucial role in many medical image analysis applications, such as disease diagnosis and prognosis. Despite the extensive progress of existing deep learning based models for medical image segmentation, they focus on extracting accurate features by designing novel network structures and rely solely on a fully connected (FC) layer for pixel-level classification. Considering the insufficient capability of the FC layer to encode the extracted diverse feature representations, we propose a Hierarchical Segmentation (HieraSeg) Network for medical image segmentation and devise a Hierarchical Fully Connected (HFC) layer. Specifically, it consists of three classifiers and decouples each category into several subcategories by introducing multiple weight vectors to denote the diverse characteristics within each category. Subcategory-level and category-level learning schemes are then designed to explicitly enforce the discrepant subcategories and automatically capture the most representative characteristics. Hence, the HFC layer can fit the variant characteristics and derive an accurate decision boundary. To enhance the robustness of the HieraSeg Network against the variability of lesions, we further propose a Dynamic-Weighting HieraSeg (DW-HieraSeg) Network, which introduces an Image-level Weight Net (IWN) and a Pixel-level Weight Net (PWN) to learn a data-driven curriculum. By progressively incorporating informative images and pixels in an easy-to-hard manner, the DW-HieraSeg Network is able to escape local optima and accelerate the training process. Additionally, a class-balanced loss is proposed to constrain the PWN and prevent overfitting in the minority category. Comprehensive experiments on three benchmark datasets, EndoScene, ISIC and Decathlon, show that our newly proposed HieraSeg and DW-HieraSeg Networks achieve state-of-the-art performance, clearly demonstrating the effectiveness of the proposed approaches for medical image segmentation.
Affiliation(s)
- Xiaoqing Guo, Chen Yang, Yixuan Yuan: Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
23
Lan L, Ye C. Recurrent generative adversarial networks for unsupervised WCE video summarization. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106971]
24
Multi-layer segmentation framework for cell nuclei using improved GVF Snake model, Watershed, and ellipse fitting. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102516]
25
Guo X, Yang C, Liu Y, Yuan Y. Learn to Threshold: ThresholdNet With Confidence-Guided Manifold Mixup for Polyp Segmentation. IEEE Trans Med Imaging 2021; 40:1134-1146. [PMID: 33360986] [DOI: 10.1109/tmi.2020.3046843]
Abstract
The automatic segmentation of polyps in endoscopy images is crucial for the early diagnosis and treatment of colorectal cancer. Existing deep learning-based methods for polyp segmentation, however, are inadequate due to limited annotated datasets and class imbalance problems. Moreover, these methods obtain the final polyp segmentation results by simply thresholding the likelihood maps at a fixed, uniform value (often set to 0.5). In this paper, we propose a novel ThresholdNet with a confidence-guided manifold mixup (CGMMix) data augmentation method, mainly for addressing the aforementioned issues in polyp segmentation. CGMMix conducts manifold mixup at the image and feature levels, and adaptively lures the decision boundary away from the under-represented polyp class with confidence guidance to alleviate the limited training data and class imbalance problems. Two consistency regularizations, a mixup feature map consistency (MFMC) loss and a mixup confidence map consistency (MCMC) loss, are devised to exploit the consistent constraints in training with the augmented mixup data. We then propose a two-branch approach, termed ThresholdNet, to collaborate the segmentation and threshold learning in an alternating training strategy. A threshold map supervision generator (TMSG) is embedded to provide supervision for the threshold map, thereby inducing better optimization of the threshold branch. As a consequence, ThresholdNet is able to calibrate the segmentation result with the learned threshold map. We illustrate the effectiveness of the proposed method on two polyp segmentation datasets, achieving state-of-the-art results with 87.307% and 87.879% dice scores on the EndoScene dataset and the WCE polyp dataset, respectively. The source code is available at https://github.com/Guo-Xiaoqing/ThresholdNet.
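The contrast between a learned threshold map and a fixed 0.5 cut-off amounts to a per-pixel comparison at inference time. A minimal illustration, not the authors' implementation:

```python
import numpy as np

def apply_threshold_map(prob: np.ndarray, thr: np.ndarray) -> np.ndarray:
    """Binarise a likelihood map with a per-pixel threshold map.
    With a constant threshold map, this reduces to ordinary global
    thresholding (e.g. the common fixed 0.5 cut-off)."""
    return (prob >= thr).astype(np.uint8)
```

A learned threshold map lets the cut-off vary spatially, e.g. lower near ambiguous polyp boundaries, while a global 0.5 applies the same decision rule everywhere.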
26