1
Sinonquel P, Eelbode T, Pech O, De Wulf D, Dewint P, Neumann H, Antonelli G, Iacopini F, Tate D, Lemmers A, Pilonis ND, Kaminski MF, Roelandt P, Hassan C, Demedts I, Maes F, Bisschops R. Clinical consequences of computer-aided colorectal polyp detection. Gut 2024:gutjnl-2024-331943. [PMID: 38876773] [DOI: 10.1136/gutjnl-2024-331943]
Abstract
BACKGROUND AND AIM Randomised trials show improved polyp detection with computer-aided detection (CADe), mostly of small lesions. However, operator and selection bias may affect CADe's true benefit, and the clinical consequences of increased detection have not yet been fully elucidated.
METHODS In this multicentre trial, a CADe system combining convolutional and recurrent neural networks was used for polyp detection. Blinded endoscopists were monitored in real time by a second observer with CADe access, and CADe detections prompted reinspection. Adenoma detection rates (ADR) and polyp detection rates were measured before and after the study. Histological assessments were done by independent histopathologists. The primary outcome compared polyp detection between endoscopists and CADe.
RESULTS In 946 patients (51.9% male, mean age 64), a total of 2141 polyps were identified, including 989 adenomas. CADe was not superior to human polyp detection overall (sensitivity 94.6% vs 96.0%) but outperformed humans when the analysis was restricted to adenomas. Unblinding yielded an additional 86 true positive polyp detections (a 1.1% ADR increase per patient; 73.8% were <5 mm). CADe also increased non-neoplastic polyp detection in an additional 4.9% of cases (a 1.8% increase in the overall polyp load). Procedure time increased by 6.6±6.5 min (+42.6%). In 22/946 patients (2.3%), the additionally detected adenomas changed surveillance intervals, mostly by pushing the number of small adenomas beyond the cut-off.
CONCLUSION Although CADe appears to be slightly more sensitive than human endoscopists, the additional gain in ADR was minimal and follow-up intervals rarely changed. Detection of non-neoplastic lesions also increased, adding to the inspection and/or polypectomy workload.
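The ADR figures above follow directly from patient counts (ADR = fraction of patients with at least one adenoma). A minimal sketch of the arithmetic; the per-patient counts below are hypothetical, chosen only to illustrate the computation, not the trial's patient-level data:

```python
def adr(patients_with_adenoma: int, total_patients: int) -> float:
    """Adenoma detection rate: fraction of patients with >=1 adenoma detected."""
    return patients_with_adenoma / total_patients

# Hypothetical illustration of a per-patient ADR gain after unblinding:
total = 946
baseline = 500  # assumed: patients with >=1 adenoma before unblinding
extra = 10      # assumed: patients whose first adenoma was found via CADe
gain = adr(baseline + extra, total) - adr(baseline, total)
print(round(gain * 100, 2))  # ADR increase in percentage points
```

With these assumed counts, ten newly positive patients out of 946 produce an ADR gain of roughly one percentage point, the same order as the 1.1% reported above.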
Affiliation(s)
- Pieter Sinonquel
  - Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
  - Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Tom Eelbode
  - Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Oliver Pech
  - Gastroenterology and Hepatology, Krankenhaus Barmherzige Brüder Regensburg, Regensburg, Germany
- Dominiek De Wulf
  - Gastroenterology and Hepatology, AZ Delta vzw, Roeselare, Belgium
- Pieter Dewint
  - Gastroenterology and Hepatology, AZ Maria Middelares vzw, Gent, Belgium
- Helmut Neumann
  - Gastroenterology and Hepatology, Gastrozentrum Lippe, Bad Salzuflen, Germany
- Giulio Antonelli
  - Gastroenterology and Digestive Endoscopy Unit, Ospedale Nuovo Regina Margherita, Roma, Italy
- Federico Iacopini
  - Gastroenterology and Digestive Endoscopy, Ospedale dei Castelli, Ariccia, Italy
- David Tate
  - Gastroenterology and Hepatology, UZ Gent, Gent, Belgium
- Arnaud Lemmers
  - Gastroenterology and Hepatology, ULB Erasme, Bruxelles, Belgium
- Michal Filip Kaminski
  - Department of Gastroenterology, Hepatology and Oncology, Medical Centre for Postgraduate Education, Warsaw, Poland
  - Department of Gastroenterological Oncology, The Maria Sklodowska-Curie Memorial Cancer Centre, Institute of Oncology, Warsaw, Poland
- Philip Roelandt
  - Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
  - Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Cesare Hassan
  - Endoscopy, Humanitas University Hospitals, Humanitas Group, Rozzano, Italy
- Ingrid Demedts
  - Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
  - Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Frederik Maes
  - Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Raf Bisschops
  - Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
  - Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
2
Wang Z, Liu Z, Yu J, Gao Y, Liu M. Multi-scale nested UNet with transformer for colorectal polyp segmentation. J Appl Clin Med Phys 2024; 25:e14351. [PMID: 38551396] [PMCID: PMC11163511] [DOI: 10.1002/acm2.14351]
Abstract
BACKGROUND Polyp detection and localization are essential tasks in colonoscopy. U-shaped convolutional neural networks have achieved remarkable segmentation performance on biomedical images, but their lack of long-range dependency modeling limits their receptive fields.
PURPOSE Our goal was to develop and test a novel architecture for polyp segmentation that learns local information together with long-range dependency modeling.
METHODS A novel architecture combining a multi-scale nested UNet structure with an integrated transformer was developed for polyp segmentation. The proposed network takes advantage of both CNNs and transformers to extract distinct feature information. The transformer layer is embedded between the encoder and decoder of a U-shaped network to learn explicit global context and long-range semantic information. To address the challenge of varying polyp sizes, a multi-scale feature fusion (MSFF) unit was proposed to fuse features at multiple resolutions.
RESULTS Four public datasets and one in-house dataset were used to train and test the model. An ablation study was also conducted to verify each component of the model. On the Kvasir-SEG and CVC-ClinicDB datasets, the proposed model achieved mean Dice scores of 0.942 and 0.950 respectively, more accurate than the other methods. To assess the generalization of the different methods, we performed two cross-dataset validations, in which the proposed model achieved the highest mean Dice score. The results demonstrate that the proposed network has powerful learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods.
CONCLUSIONS The proposed model produced more accurate polyp segmentation than current methods on four public datasets and one in-house dataset. Its ability to segment polyps of different sizes shows its potential for clinical application.
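The mean Dice scores used to rank the models above are computed per image and averaged. A minimal sketch of the Dice similarity coefficient on binary masks; the masks below are tiny hypothetical examples, not data from the paper:

```python
def dice(pred, target):
    """Dice similarity coefficient for two binary masks (flat 0/1 lists):
    2*|P∩T| / (|P| + |T|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0  # two empty masks agree perfectly

pred   = [1, 1, 0, 0, 1, 0]  # hypothetical predicted mask
target = [1, 0, 0, 0, 1, 1]  # hypothetical ground-truth mask
print(dice(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

In practice the masks are 2-D images flattened per sample, and the reported mean Dice is the average of this score over the test set.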
Affiliation(s)
- Zenan Wang
  - Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Zhen Liu
  - Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Jianfeng Yu
  - Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Yingxin Gao
  - Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Ming Liu
  - Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha, China
3
Alam MJ, Fattah SA. SR-AttNet: An Interpretable Stretch-Relax Attention based Deep Neural Network for Polyp Segmentation in Colonoscopy Images. Comput Biol Med 2023; 160:106945. [PMID: 37163966] [DOI: 10.1016/j.compbiomed.2023.106945]
Abstract
BACKGROUND Colorectal polyps are common structural gastrointestinal (GI) anomalies that can, in certain cases, turn malignant. Colonoscopic image inspection is therefore an important step for isolating polyps, as well as removing them if necessary. However, the procedure takes around 30-60 min, and inspecting each image for polyps can prove to be a tedious task. Hence, an automatic computerized process for efficient and accurate polyp isolation can be a useful tool.
METHODS In this study, a deep learning network is introduced for colorectal polyp segmentation. The network is based on an encoder-decoder architecture with both un-dilated and dilated filtering, in order to extract both near and far local information and to perceive image depth. Four-fold skip-connections exist between each spatial encoder-decoder stage due to the two types of filtering, and a 'Feature-to-Mask' pipeline processes the decoded dilated and un-dilated features for the final prediction. The proposed network implements a 'Stretch-Relax' based attention system, SR-Attention, to generate high-variance spatial features and thereby obtain useful attention masks for cognitive feature selection. From this 'Stretch-Relax' attention operation, the network is termed 'SR-AttNet'.
RESULTS Training and optimization are performed on four different datasets, and inference is done on five (Kvasir-SEG, CVC-ClinicDB, CVC-Colon, ETIS-Larib, EndoCV2020), all of which yield higher Dice scores compared with state-of-the-art and existing networks. The efficacy and interpretability of SR-Attention are also demonstrated based on quantitative variance.
CONCLUSION Consequently, the proposed SR-AttNet can be considered an automated and general approach for polyp segmentation during colonoscopy.
Affiliation(s)
- Md Jahin Alam
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Shaikh Anowarul Fattah
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
4
Colon cancer stage detection in colonoscopy images using YOLOv3 MSF deep learning architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104283]
5
Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-58080-3_163-2]
6
Strümke I, Hicks SA, Thambawita V, Jha D, Parasa S, Riegler MA, Halvorsen P. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_163]
7
Medical Image Classification Based on Information Interaction Perception Mechanism. Comput Intell Neurosci 2021; 2021:8429899. [PMID: 34912447] [PMCID: PMC8668365] [DOI: 10.1155/2021/8429899]
Abstract
Colorectal cancer originates from adenomatous polyps. Adenomatous polyps start out benign, but over time they can become malignant, spreading to adherent and surrounding organs such as lymph nodes, the liver, or the lungs, and eventually leading to complications and death. Factors such as an operator's lack of experience and visual fatigue directly affect the diagnostic accuracy of colonoscopy. To relieve the pressure on medical imaging personnel, this paper proposed a network model for colonic polyp detection using colonoscopy images. Considering the unnoticeable surface texture of colonic polyps, this paper designed a channel information interaction perception (CIIP) module. Based on this module, an information interaction perception network (IIP-Net) is proposed. In order to improve classification accuracy and reduce computational cost, the network used three classifier structures: fully connected (FC), global average pooling followed by a fully connected layer (GAP-FC), and convolution followed by global average pooling (C-GAP). We evaluated the performance of IIP-Net by randomly selecting colonoscopy images from a gastroscopy database. The experimental results showed that the overall accuracy of the IIP-NET54-GAP-FC model is 99.59%, and its accuracy on colonic polyps is 99.40%. Overall, IIP-NET54-GAP-FC performed extremely well.
8
Deep Ensembles Based on Stochastic Activations for Semantic Segmentation. Signals 2021. [DOI: 10.3390/signals2040047]
Abstract
Semantic segmentation is a very popular topic in modern computer vision, and it has applications in many fields. Researchers have proposed a variety of architectures for semantic image segmentation. The most common ones exploit an encoder-decoder structure that aims to capture the semantics of the image and its low-level features. The encoder uses convolutional layers, in general with a stride larger than one, to extract the features, while the decoder recreates the image by upsampling and using skip connections with the first layers. The objective of this study is to propose a method for creating an ensemble of CNNs by enhancing diversity among networks with different activation functions. In this work, we use DeepLabV3+ as the architecture to test the effectiveness of creating an ensemble of networks by randomly changing the activation functions inside the network multiple times. We also use different backbone networks in our DeepLabV3+ to validate our findings. A comprehensive evaluation of the proposed approach is conducted across two different image segmentation problems: the first is from the medical field, i.e., polyp segmentation for early detection of colorectal cancer, and the second is skin detection for several different applications, including face detection, hand gesture recognition, and many others. On the first problem, we reach a Dice coefficient of 0.888 and a mean intersection over union (mIoU) of 0.825 on the competitive Kvasir-SEG dataset. The high performance of the proposed ensemble is confirmed in skin detection, where the proposed approach is ranked first relative to other state-of-the-art approaches (including HarDNet) on a large set of testing datasets.
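The mIoU reported above is the per-class intersection over union averaged across classes. A minimal sketch on flat label lists; the labels below are hypothetical examples, not data from the paper:

```python
def iou(pred, target, cls):
    """Intersection over union for one class over flat label lists."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return inter / union if union else 1.0  # class absent everywhere: perfect

def mean_iou(pred, target, classes):
    """mIoU: the per-class IoU averaged over all classes."""
    return sum(iou(pred, target, c) for c in classes) / len(classes)

pred   = [0, 0, 1, 1, 1]  # hypothetical predicted labels (0 = background, 1 = polyp)
target = [0, 1, 1, 1, 0]  # hypothetical ground truth
print(mean_iou(pred, target, classes=[0, 1]))
```

Unlike plain pixel accuracy, mIoU weights each class equally, so a model cannot score well by predicting only the dominant background class.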
10
A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images with Atrous Convolution. Sensors 2021; 21:1441. [PMID: 33669539] [PMCID: PMC7922083] [DOI: 10.3390/s21041441]
Abstract
Colon carcinoma is one of the leading causes of cancer-related death in both men and women. Automatic colorectal polyp segmentation and detection in colonoscopy videos help endoscopists identify colorectal disease more easily, making it a promising method for preventing colon cancer. In this study, we developed a fully automated pixel-wise polyp segmentation model named A-DenseUNet. The proposed architecture adapts to different datasets, adjusting for the unknown depth of the network by sharing multiscale encoding information with the different levels of the decoder. We also used multiple dilated convolutions with various atrous rates to observe a large field of view without increasing the computational cost, and to prevent the loss of spatial information that dimensionality reduction would cause. We utilized an attention mechanism to remove noise and inappropriate information, leading to the comprehensive re-establishment of contextual features. Our experiments demonstrated that the proposed architecture achieved significant segmentation results on public datasets. A-DenseUNet achieved a 90% Dice coefficient score on the Kvasir-SEG dataset and a 91% Dice coefficient score on the CVC-612 dataset, both higher than the scores of other deep learning models such as UNet++, ResUNet, U-Net, PraNet, and ResUNet++ for segmenting polyps in colonoscopy images.
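The benefit of atrous (dilated) convolution described above, a larger field of view at no extra parameter cost, can be illustrated with a 1-D sketch. This is a generic illustration of dilation, not the paper's implementation; the signal and kernel are hypothetical:

```python
def dilated_conv1d(signal, kernel, rate):
    """'Valid' 1-D convolution (cross-correlation, as in deep learning frameworks)
    with dilation: kernel taps are spaced `rate` samples apart."""
    span = (len(kernel) - 1) * rate  # receptive field minus one
    return [
        sum(kernel[j] * signal[i + j * rate] for j in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

x = [1, 2, 3, 4, 5, 6, 7]
k = [1, 0, -1]                      # 3 parameters either way
print(dilated_conv1d(x, k, rate=1))  # field of view: 3 samples
print(dilated_conv1d(x, k, rate=2))  # field of view: 5 samples, same 3 parameters
```

With rate 2 the same three-tap kernel spans five input samples, which is exactly why stacking several atrous rates (as in A-DenseUNet) widens the receptive field without extra weights or downsampling.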
11
Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021; 25:514-525. [PMID: 32750912] [DOI: 10.1109/jbhi.2020.2997760]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
12
Artificial Intelligence in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_163-1]
13
Misawa M, Kudo SE, Mori Y, Maeda Y, Ogawa Y, Ichimasa K, Kudo T, Wakamura K, Hayashi T, Miyachi H, Baba T, Ishida F, Itoh H, Oda M, Mori K. Current status and future perspective on artificial intelligence for lower endoscopy. Dig Endosc 2021; 33:273-284. [PMID: 32969051] [DOI: 10.1111/den.13847]
Abstract
The global incidence and mortality rate of colorectal cancer remain high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g. cecal intubation, bowel preparation, and withdrawal speed) related to the adenoma detection rate is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep learning, a data-driven machine learning technique, has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine for computer-aided diagnosis in colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits, such as helping to educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we focus on computer-aided diagnosis during colonoscopy as reported by gastroenterologists and discuss its status, limitations, and future prospects.
Affiliation(s)
- Masashi Misawa
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Shin-Ei Kudo
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
  - Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
- Yasuharu Maeda
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yushi Ogawa
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Katsuro Ichimasa
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kunihiko Wakamura
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Takemasa Hayashi
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hideyuki Miyachi
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toshiyuki Baba
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Fumio Ishida
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh
  - Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda
  - Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori
  - Graduate School of Informatics, Nagoya University, Aichi, Japan
14
Attardo S, Chandrasekar VT, Spadaccini M, Maselli R, Patel HK, Desai M, Capogreco A, Badalamenti M, Galtieri PA, Pellegatta G, Fugazza A, Carrara S, Anderloni A, Occhipinti P, Hassan C, Sharma P, Repici A. Artificial intelligence technologies for the detection of colorectal lesions: The future is now. World J Gastroenterol 2020; 26:5606-5616. [PMID: 33088155] [PMCID: PMC7545398] [DOI: 10.3748/wjg.v26.i37.5606]
Abstract
Several studies have shown a significant adenoma miss rate of up to 35% during screening colonoscopy, especially in patients with diminutive adenomas. The use of artificial intelligence (AI) in colonoscopy has been gaining popularity by helping endoscopists with polyp detection, with the aim of increasing their adenoma detection rate (ADR) and polyp detection rate (PDR) in order to reduce the incidence of interval cancers. Deep convolutional neural network (DCNN)-based AI systems for polyp detection have been trained and tested in ex vivo settings such as colonoscopy still images or videos. Recent trials have evaluated the real-time efficacy of DCNN-based systems, showing promising results in terms of improved ADR and PDR. In this review, we report data from the preliminary ex vivo experiences and summarize the results of the initial randomized controlled trials.
Affiliation(s)
- Simona Attardo
  - Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
- Marco Spadaccini
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
  - Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
- Roberta Maselli
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Harsh K Patel
  - Department of Internal Medicine, Ochsner Clinic Foundation, New Orleans, LA 70124, United States
- Madhav Desai
  - Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
- Antonio Capogreco
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
  - Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
- Matteo Badalamenti
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Gaia Pellegatta
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Alessandro Fugazza
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Silvia Carrara
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Andrea Anderloni
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Pietro Occhipinti
  - Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
- Cesare Hassan
  - Endoscopy Unit, Nuovo Regina Margherita Hospital, Roma 00153, Italy
- Prateek Sharma
  - Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
- Alessandro Repici
  - Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
  - Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
15
Wang W, Tian J, Zhang C, Luo Y, Wang X, Li J. An improved deep learning approach and its applications on colonic polyp images detection. BMC Med Imaging 2020; 20:83. [PMID: 32698839] [PMCID: PMC7374886] [DOI: 10.1186/s12880-020-00482-3]
Abstract
Background Colonic polyps are more likely to become cancerous, especially those with a large diameter, occurring in large numbers, or showing atypical hyperplasia. If colonic polyps are not treated at an early stage, they are likely to develop into colon cancer. Colonoscopy is easily limited by the operator's experience, and factors such as inexperience and visual fatigue directly affect the accuracy of diagnosis. In cooperation with Hunan Children's Hospital, we proposed and improved a deep learning approach with global average pooling (GAP) for assisted diagnosis in colonoscopy. Our approach can prompt endoscopists in real time to pay attention to polyps that might otherwise be ignored, improve the detection rate, reduce missed diagnoses, and improve the efficiency of medical diagnosis.
Methods We selected colonoscopy images from the gastrointestinal endoscopy room of Hunan Children's Hospital to form the colonic polyp datasets, and we applied deep learning-based image classification to the classification of colonic polyps. The classic networks we used are VGGNets and ResNets. Using global average pooling, we proposed the improved approaches VGGNets-GAP and ResNets-GAP.
Results The accuracies of all models on the datasets exceed 98%, and the TPR and TNR are above 96% and 98%, respectively. In addition, the VGGNets-GAP networks not only have high classification accuracies but also have far fewer parameters than the original VGGNets.
Conclusions The experimental results show that the proposed approach is effective for the automatic detection of colonic polyps. The innovations of our method are twofold: (1) the detection accuracy of colonic polyps has been improved, and (2) our approach reduces memory consumption and makes the model lightweight. Compared with the original VGG networks, the parameters of our VGG19-GAP network are greatly reduced.
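Global average pooling, the key change behind the VGGNets-GAP and ResNets-GAP variants above, replaces the flatten-plus-large-FC step with a per-channel mean. A minimal sketch on nested lists; the feature-map values and shapes are hypothetical:

```python
def global_average_pool(feature_maps):
    """Collapse each HxW channel (a list of rows) to its mean,
    yielding one value per channel."""
    pooled = []
    for channel in feature_maps:
        values = [v for row in channel for v in row]
        pooled.append(sum(values) / len(values))
    return pooled

# Two hypothetical 2x2 channels -> a 2-value descriptor fed to a small classifier.
fmap = [
    [[1.0, 3.0], [5.0, 7.0]],
    [[2.0, 2.0], [2.0, 2.0]],
]
print(global_average_pool(fmap))  # [4.0, 2.0]
```

This is where the parameter saving comes from: flattening a 7x7x512 feature map into a 4096-unit FC layer needs 7*7*512*4096 weights, whereas GAP reduces the map to 512 values first, so the subsequent classifier layer is orders of magnitude smaller.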
Affiliation(s)
- Wei Wang
  - School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Jinge Tian
  - School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Chengwen Zhang
  - School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Yanhong Luo
  - Hunan Children's Hospital, Changsha, 410000, China
- Xin Wang
  - School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Ji Li
  - School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
16
Thambawita V, Jha D, Hammer HL, Johansen HD, Johansen D, Halvorsen P, Riegler MA. An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification. ACM Trans Comput Healthcare 2020. [DOI: 10.1145/3386295]
Abstract
Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Currently, automatic analysis of diseases in the GI tract is a hot topic in both computer science and medical journals. Nevertheless, the evaluation of such automatic analysis is often incomplete or simply wrong: algorithms are often tested only on small and biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and of machine learning model behaviour across datasets is crucial to bring research in the field to a new quality level. Toward this goal, we present comprehensive evaluations of five distinct machine learning models, using global features and deep neural networks, that can classify 16 different key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp removal conditions, and normal findings, from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons built from six performance metrics (recall, precision, specificity, accuracy, F1-score, and the Matthews correlation coefficient) to demonstrate how to determine the real capabilities of models rather than evaluating them shallowly. Furthermore, we perform cross-dataset evaluations using different datasets for training and testing. With these cross-dataset evaluations, we demonstrate the challenge of actually building a generalizable model that could be used across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods need to be applied to obtain reliable models, rather than depending on evaluations of splits of the same dataset; performance metrics should always be interpreted together rather than relying on a single metric.
Affiliation(s)
- Debesh Jha
- SimulaMet and UiT—The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UiT—The Arctic University of Norway, Tromsø, Norway
- Pål Halvorsen
- SimulaMet and Oslo Metropolitan University, Oslo, Norway
17
Mostafiz R, Rahman MM, Uddin MS. Gastrointestinal polyp classification through empirical mode decomposition and neural features. SN Appl Sci 2020. [DOI: 10.1007/s42452-020-2944-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
18
Deeba F, Bui FM, Wahid KA. Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.04.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
19
Kudo SE, Mori Y, Misawa M, Takeda K, Kudo T, Itoh H, Oda M, Mori K. Artificial intelligence and colonoscopy: Current status and future perspectives. Dig Endosc 2019; 31:363-371. [PMID: 30624835 DOI: 10.1111/den.13340] [Citation(s) in RCA: 70] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 12/04/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND AIM Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e. predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Ultimately, these functions that CAD provides could produce a higher adenoma detection rate and reduce the cost of polypectomy for hyperplastic polyps. METHODS AND RESULTS Currently, research on automated polyp detection has been limited to experimental assessments using an algorithm based on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, which exceeded the threshold for optical biopsy. CONCLUSION We introduce the potential of using CAD for colonoscopy and describe the most recent conditions for regulatory approval for artificial intelligence-assisted medical devices.
Affiliation(s)
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kenichi Takeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Aichi, Japan
20
Liu M, Jiang J, Wang Z. Colonic Polyp Detection in Endoscopic Videos With Single Shot Detection Based Deep Convolutional Neural Network. IEEE Access 2019; 7:75058-75066. [PMID: 33604228 PMCID: PMC7889061 DOI: 10.1109/access.2019.2921027] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
A major rise in the prevalence and influence of colorectal cancer (CRC) leads to substantially increasing healthcare costs and even death. It is widely accepted that early detection and removal of colonic polyps can prevent CRC. Detection of colonic polyps in colonoscopy videos is difficult because of the complex environment of the colon and the varied shapes of polyps. Researchers have demonstrated the feasibility of Convolutional Neural Network (CNN)-based detection of polyps, but better feature extractors are needed to improve detection performance. In this paper, we investigated the potential of the single shot detector (SSD) framework for detecting polyps in colonoscopy videos. SSD is a one-stage method that uses a feed-forward CNN to produce a collection of fixed-size bounding boxes for each object from different feature maps. Three different feature extractors, ResNet50, VGG16, and InceptionV3, were assessed. Multi-scale feature maps integrated into SSD were designed for ResNet50 and InceptionV3, respectively. We validated this method on the 2015 MICCAI polyp detection challenge datasets and compared it with the teams that attended the challenge, with YOLOv3, and with a two-stage method, Faster R-CNN. Our results demonstrated that the proposed method surpassed all the MICCAI challenge teams and YOLOv3 and was comparable with the two-stage method. In terms of detection speed in particular, the proposed method outperformed all the other methods and met real-time application requirements. Among the feature extractors, InceptionV3 obtained the best precision and recall. In conclusion, the SSD-based method achieved excellent detection performance in polyp detection and can potentially improve diagnostic accuracy and efficiency.
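Detection results like these are scored by overlap between predicted and ground-truth boxes; a minimal intersection-over-union sketch (illustrative only, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

In SSD-style evaluation, a predicted box typically counts as a true positive when its IoU with a ground-truth polyp box exceeds a fixed threshold (0.5 is a common choice).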
Affiliation(s)
- Ming Liu
- Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha 410083, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing 100020, China
21
Zhang R, Zheng Y, Poon CC, Shen D, Lau JY. Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker. Pattern Recognit 2018; 83:209-219. [PMID: 31105338 PMCID: PMC6519928 DOI: 10.1016/j.patcog.2018.05.026] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
A computer-aided detection (CAD) tool for locating and detecting polyps can help reduce the chance of missing polyps during colonoscopy. Nevertheless, state-of-the-art algorithms were either computationally complex or suffered from low sensitivity, and were therefore unsuitable for use in a real clinical setting. In this paper, a novel regression-based Convolutional Neural Network (CNN) pipeline is presented for polyp detection during colonoscopy. The proposed pipeline was constructed in two parts: 1) to learn the spatial features of colorectal polyps, a fast object detection algorithm named ResYOLO was pre-trained on a large non-medical image database and further fine-tuned with colonoscopic images extracted from videos; and 2) temporal information was incorporated via a tracker named Efficient Convolution Operators (ECO) to refine the detection results given by ResYOLO. Evaluated on 17,574 frames extracted from 18 endoscopic videos of the ASU-Mayo database, the proposed method detected frames with polyps with a precision of 88.6%, a recall of 71.6%, and a processing speed of 6.5 frames per second; i.e., the method can accurately locate polyps in more frames and at a faster speed than existing methods. In conclusion, the proposed method has great potential for assisting endoscopists in tracking polyps during colonoscopy.
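The tracker's contribution, keeping a polyp localized across frames where the per-frame detector fires intermittently, can be caricatured at the frame level. A toy sketch of the detect-then-track idea (the function name and gap policy are our illustration, not the paper's ECO pipeline):

```python
def refine_with_tracker(per_frame_detections, max_gap=2):
    """Bridge short detector gaps: once a polyp has been detected, assume a
    tracker re-localises it for up to `max_gap` consecutive missed frames.
    Input and output are lists of booleans, one per video frame."""
    refined = []
    misses_since_hit = None  # None until the first genuine detection
    for detected in per_frame_detections:
        if detected:
            misses_since_hit = 0
            refined.append(True)
        elif misses_since_hit is not None and misses_since_hit < max_gap:
            misses_since_hit += 1
            refined.append(True)   # tracker bridges the gap
        else:
            refined.append(False)  # gap too long; drop the track
    return refined
```

The effect is to raise per-frame recall (fewer flickering misses) at some cost in precision, which matches the motivation the authors give for adding temporal information.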
Affiliation(s)
- Ruikai Zhang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Yali Zheng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Carmen C.Y. Poon
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Corresponding author
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Corresponding author at: Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA.
- James Y.W. Lau
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
22
Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat Biomed Eng 2018; 2:741-748. [PMID: 31015647 DOI: 10.1038/s41551-018-0301-3] [Citation(s) in RCA: 247] [Impact Index Per Article: 41.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 08/29/2018] [Indexed: 02/08/2023]
Abstract
The detection and removal of precancerous polyps via colonoscopy is the gold standard for the prevention of colon cancer. However, the detection rate of adenomatous polyps can vary significantly among endoscopists. Here, we show that a machine-learning algorithm can detect polyps in clinical colonoscopies, in real time and with high sensitivity and specificity. We developed the deep-learning algorithm by using data from 1,290 patients, and validated it on 27,113 newly collected colonoscopy images from 1,138 patients with at least one detected polyp (per-image sensitivity, 94.38%; per-image specificity, 95.92%; area under the receiver operating characteristic curve, 0.984), on a public database of 612 polyp-containing images (per-image sensitivity, 88.24%), on 138 colonoscopy videos with histologically confirmed polyps (per-image sensitivity, 91.64%; per-polyp sensitivity, 100%), and on 54 unaltered full-range colonoscopy videos without polyps (per-image specificity, 95.40%). By using a multi-threaded processing system, the algorithm can process at least 25 frames per second with a latency of 76.80 ± 5.60 ms in real-time video analysis. The software may aid endoscopists while performing colonoscopies, and help assess differences in polyp and adenoma detection performance among endoscopists.
23
Sánchez-González A, García-Zapirain B, Sierra-Sosa D, Elmaghraby A. Automatized colon polyp segmentation via contour region analysis. Comput Biol Med 2018; 100:152-164. [DOI: 10.1016/j.compbiomed.2018.07.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/03/2018] [Accepted: 07/04/2018] [Indexed: 12/13/2022]
24
Shin Y, Balasingham I. Automatic polyp frame screening using patch based combined feature and dictionary learning. Comput Med Imaging Graph 2018; 69:33-42. [PMID: 30172091 DOI: 10.1016/j.compmedimag.2018.08.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 07/29/2018] [Accepted: 08/13/2018] [Indexed: 12/15/2022]
Abstract
Polyps in the colon can potentially become malignant cancerous tissue, and early detection and removal lead to a high survival rate. Certain types of polyps can be difficult to detect even for highly trained physicians. Inspired by this problem, our study aims to improve human detection performance by developing an automatic polyp screening framework as a decision support tool. We use a combined feature method based on small image patches. Features include shape and color information and are extracted using histogram of oriented gradients and hue histogram methods. Dictionary-learning-based training is used to learn features, and the final feature vector is formed using sparse coding. For classification, we use patch-level classification based on a linear support vector machine followed by whole-image thresholding. The proposed framework is evaluated using three public polyp databases. Our experimental results show that the proposed scheme successfully classified polyps and normal images with over 95% classification accuracy, sensitivity, specificity, and precision. In addition, we compare the performance of the proposed scheme with conventional feature-based methods and with the convolutional neural network (CNN)-based deep learning approach, which is the state-of-the-art technique in many image classification applications.
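The final patch-then-whole-image step can be sketched in a few lines: each patch gets an SVM score, and the frame is flagged when enough patches vote positive. A simplified illustration (both thresholds are our placeholders, not values from the paper):

```python
def classify_frame(patch_scores, patch_threshold=0.0, frame_threshold=0.2):
    """Whole-image decision from per-patch SVM decision values: flag the frame
    as a polyp frame if the fraction of positive patches exceeds
    `frame_threshold`."""
    positive = sum(1 for s in patch_scores if s > patch_threshold)
    return positive / len(patch_scores) > frame_threshold
```

Raising `frame_threshold` trades sensitivity for specificity, which is the usual knob when a screening tool must keep false alarms tolerable.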
Affiliation(s)
- Younghak Shin
- Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Ilangko Balasingham
- Intervention Centre, Oslo University Hospital, Oslo NO-0027, Norway; Institute of Clinical Medicine, University of Oslo, and the Norwegian University of Science and Technology (NTNU), Norway
25
A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information. Int J Colorectal Dis 2018. [PMID: 29520455 DOI: 10.1007/s00384-018-2980-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
PURPOSE The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas could develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. METHODS One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at the Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed using SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization of the SVM. RESULTS The mean classification accuracy according to type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected using the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) with a more pedunculated morphology (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the system usability scale questionnaire; 89.3% of participants expressed positive opinions. CONCLUSIONS We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
26
Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos. IEEE J Biomed Health Inform 2016; 21:65-75. [PMID: 28114049 DOI: 10.1109/jbhi.2016.2637004] [Citation(s) in RCA: 101] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.
27
Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification. Comput Math Methods Med 2016; 2016:6584725. [PMID: 27847543 PMCID: PMC5101370 DOI: 10.1155/2016/6584725] [Citation(s) in RCA: 73] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2016] [Accepted: 10/04/2016] [Indexed: 12/26/2022]
Abstract
Recently, Deep Learning, especially through Convolutional Neural Networks (CNNs) has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering, selecting, and using these features in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still provides a challenge due to the lack of large and publicly available annotated databases. In this work we explore Deep Learning for the automated classification of colonic polyps using different configurations for training CNNs from scratch (or full training) and distinct architectures of pretrained CNNs tested on 8-HD-endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification and the good results suggest that features learned by CNNs trained from scratch and the “off-the-shelf” CNNs features can be highly relevant for automated classification of colonic polyps. Moreover, we also show that the combination of classical features and “off-the-shelf” CNNs features can be a good approach to further improve the results.
28
Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans Med Imaging 2016; 35:1299-1312. [PMID: 26978662 DOI: 10.1109/tmi.2016.2535302] [Citation(s) in RCA: 985] [Impact Index Per Article: 123.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
29
Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE Trans Med Imaging 2016; 35:630-44. [PMID: 26462083 DOI: 10.1109/tmi.2015.2487997] [Citation(s) in RCA: 231] [Impact Index Per Article: 28.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
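The detection latency the authors introduce is straightforward to state as code: the delay from a polyp's first on-screen frame to its first detected frame, converted to seconds. A sketch under the assumption of a fixed frame rate (function and parameter names are ours):

```python
def detection_latency(first_appearance_frame, detected_frames, fps=25.0):
    """Latency in seconds from a polyp's first appearance to its first
    detection. Returns None if the polyp is never detected after appearing."""
    hits = [f for f in detected_frames if f >= first_appearance_frame]
    if not hits:
        return None
    return (min(hits) - first_appearance_frame) / fps
```

Reporting latency alongside per-frame sensitivity captures something the frame-level metrics miss: a detector that eventually finds every polyp but only after several seconds is less useful during a live withdrawal.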
30
El Khatib A, Werghi N, Al-Ahmad H. Automatic polyp detection: A comparative study. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:2669-2672. [PMID: 26736841 DOI: 10.1109/embc.2015.7318941] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this work we present a performance comparison between a set of different state-of-the-art image descriptors for the automatic detection of polyps in colonoscopy videos. This set includes: Local binary patterns, 2-dimensional Gabor filters, wavelet-based texture, and histogram of oriented gradients. We use these descriptors in conjunction with support vector machine or nearest neighbor classifiers to classify candidate regions, which in turn are selected using the maximally stable extremal regions algorithm. We present performance scores on the ASU-Mayo Clinic polyp database.
31
Nawarathna R, Oh J, Muthukudage J, Tavanapong W, Wong J, de Groen PC, Tang SJ. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns. Neurocomputing 2014; 144:70-91. [PMID: 25132723 DOI: 10.1016/j.neucom.2014.02.064] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician's time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a "texton histogram" of an image block as features. The histogram captures the distribution of different "textons" representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images.
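The texton histogram feature can be sketched in a few lines: each pixel's filter-bank response vector is assigned to its nearest learned texton, and the assignments are histogrammed over the image block. A simplified illustration (the paper's actual LM filter bank and LBP computation are omitted):

```python
def texton_histogram(responses, textons):
    """Normalised histogram of nearest-texton assignments for an image block.
    `responses`: one filter-response vector per pixel;
    `textons`: learned representative response vectors."""
    counts = [0] * len(textons)
    for r in responses:
        # Nearest texton by squared Euclidean distance.
        nearest = min(range(len(textons)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(r, textons[i])))
        counts[nearest] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

Two blocks with visually different textures land on different textons and therefore yield separable histograms, which is what lets a downstream classifier discern abnormal from normal mucosa.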
Affiliation(s)
- Ruwan Nawarathna
- Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, U.S.A
- JungHwan Oh
- Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, U.S.A
- Jayantha Muthukudage
- Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, U.S.A
- Johnny Wong
- Computer Science Department, Iowa State University, Ames, IA 50011, U.S.A
- Shou Jiang Tang
- University of Mississippi Medical Center, Jackson, MS 39216, U.S.A