1. Liu Q, Han Z, Liu Z, Zhang J. HMA-Net: A deep U-shaped network combined with HarDNet and multi-attention mechanism for medical image segmentation. Med Phys 2023; 50:1635-1646. [PMID: 36303466] [DOI: 10.1002/mp.16065]
Abstract
BACKGROUND Automatic segmentation of lesions, organs, and tissues from medical images is an important part of medical image analysis and is useful for improving the accuracy of disease diagnosis and clinical analysis. For skin melanoma lesions, the contrast between lesions and the surrounding skin is low, and the lesions exhibit irregular shapes, uneven distribution, and complex local and boundary features. Moreover, hair covering the lesions can destroy the local context. Polyp characteristics such as shape, size, and appearance vary across development stages. Early polyps of small size have no distinctive features and can easily be mistaken for other intestinal structures, such as wrinkles and folds. Imaging positions and illumination conditions alter the appearance of polyps and can leave no visible transition between polyps and the surrounding tissue. Accurately segmenting skin lesions and polyps therefore remains a challenging task due to the high variability in the location, shape, size, color, and texture of the target object, and developing a robust and accurate segmentation method for medical images is necessary. PURPOSE To achieve better segmentation performance in the face of the difficulties above, a U-shaped network based on an encoder-decoder structure is proposed to enhance segmentation performance in target regions. METHODS In this paper, a novel deep encoder-decoder network that combines HarDNet, dual attention (DA), and reverse attention (RA) is proposed. First, HarDNet68 is employed to extract backbone features while improving inference speed and computational efficiency. Second, the DA block is adopted to capture global feature dependencies in the spatial and channel dimensions and to enrich the contextual information of local features. Finally, three RA blocks are exploited to fuse and refine boundary features and obtain the final segmentation results.
RESULTS Extensive experiments are conducted on a skin lesion dataset consisting of ISIC2016, ISIC2017, and ISIC2018, and on a polyp dataset consisting of several public datasets: Kvasir, CVC-ClinicDB, CVC-ColonDB, ETIS, and EndoScene. The proposed method outperforms several state-of-the-art segmentation models on the ISIC2018, ISIC2017, and ISIC2016 datasets, with Jaccard indexes of 0.846, 0.881, and 0.894, mean Dice coefficients of 0.907, 0.929, and 0.939, precisions of 0.908, 0.977, and 0.968, and accuracies of 0.953, 0.975, and 0.972. Additionally, the proposed method also performs better than several state-of-the-art segmentation models on the Kvasir, CVC-ClinicDB, CVC-ColonDB, ETIS, and EndoScene datasets, with mean Dice coefficients of 0.907, 0.935, 0.716, 0.667, and 0.887, mean intersection over union coefficients of 0.850, 0.885, 0.644, 0.595, and 0.821, structural similarity measures of 0.918, 0.953, 0.823, 0.807, and 0.933, enhanced alignment measures of 0.952, 0.983, 0.850, 0.817, and 0.957, and mean absolute errors of 0.026, 0.007, 0.037, 0.030, and 0.009. CONCLUSIONS The proposed deep network improves lesion segmentation performance in polyp and skin lesion images. The quantitative and qualitative results show that the proposed method can effectively handle this challenging segmentation task and reveal its great potential for clinical application.
Affiliation(s)
- Qiaohong Liu, School of Medical Instruments, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Ziqi Han, School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Ziling Liu, School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Juan Zhang, School of Electronic and Electrical Engineering, Control Engineering, Shanghai University of Engineering Science, Shanghai, China
2. DBE-Net: Dual Boundary-Guided Attention Exploration Network for Polyp Segmentation. Diagnostics (Basel) 2023; 13:896. [PMID: 36900040] [PMCID: PMC10001089] [DOI: 10.3390/diagnostics13050896]
Abstract
Automatic segmentation of polyps during colonoscopy can help doctors accurately find the polyp area and remove abnormal tissues in time to reduce the possibility of polyps transforming into cancer. However, the current polyp segmentation research still has the following problems: blurry polyp boundaries, multi-scale adaptability of polyps, and close resemblances between polyps and nearby normal tissues. To tackle these issues, this paper proposes a dual boundary-guided attention exploration network (DBE-Net) for polyp segmentation. Firstly, we propose a dual boundary-guided attention exploration module to solve the boundary-blurring problem. This module uses a coarse-to-fine strategy to progressively approximate the real polyp boundary. Secondly, a multi-scale context aggregation enhancement module is introduced to accommodate the multi-scale variation of polyps. Finally, we propose a low-level detail enhancement module, which can extract more low-level details and promote the performance of the overall network. Extensive experiments on five polyp segmentation benchmark datasets show that our method achieves superior performance and stronger generalization ability than state-of-the-art methods. Especially for CVC-ColonDB and ETIS, two challenging datasets among the five datasets, our method achieves excellent results of 82.4% and 80.6% in terms of mDice (mean dice similarity coefficient) and improves by 5.1% and 5.9% compared to the state-of-the-art methods.
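Boundary-guided supervision of the kind described above requires an explicit boundary label derived from the segmentation mask; a common derivation is the morphological gradient (dilation minus erosion). A dependency-light NumPy sketch, offered as an illustration rather than the authors' implementation:

```python
import numpy as np

def boundary_map(mask: np.ndarray) -> np.ndarray:
    """Morphological gradient of a binary mask: dilation minus erosion with a
    3x3 structuring element, yielding a thin boundary band for supervision."""
    m = np.pad(mask.astype(bool), 1, mode="edge")
    # Gather, for every pixel, its 3x3 neighbourhood (8 neighbours + centre).
    shifts = [m[1 + di : m.shape[0] - 1 + di, 1 + dj : m.shape[1] - 1 + dj]
              for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    stack = np.stack(shifts)
    dilated = stack.any(axis=0)   # pixel ON if any neighbour is ON
    eroded = stack.all(axis=0)    # pixel ON only if all neighbours are ON
    return (dilated & ~eroded).astype(np.uint8)

mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1                # a 3x3 square "polyp"
edge = boundary_map(mask)         # a one-pixel ring around the square
```

Such a boundary map can then serve as the target for a boundary-attention branch while the full mask supervises the region branch.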
3. Ali H, Sharif M, Yasmin M, Rehmani MH. A shallow extraction of texture features for classification of abnormal video endoscopy frames. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103733]
4. Investigating the significance of color space for abnormality detection in wireless capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103624]
5. Goel N, Kaur S, Gunjan D, Mahapatra SJ. Dilated CNN for abnormality detection in wireless capsule endoscopy images. Soft Comput 2022. [DOI: 10.1007/s00500-021-06546-y]
6. Ayyaz MS, Lali MIU, Hussain M, Rauf HT, Alouffi B, Alyami H, Wasti S. Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos. Diagnostics (Basel) 2021; 12:43. [PMID: 35054210] [PMCID: PMC8775223] [DOI: 10.3390/diagnostics12010043]
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance between different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) to extract features, applying transfer learning techniques before using them as feature extractors. We used a genetic algorithm (GA) for feature selection due to its adaptive nature, and fused the selected features of both models using a serial-based approach. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with a cubic SVM, achieving 99.8% accuracy. To validate the proposed technique, we considered the following statistical measures: classification accuracy, recall, precision, False Negative Rate (FNR), Area Under the Curve (AUC), and time. In addition, we provide a fair comparison of our proposed technique with existing state-of-the-art techniques, which demonstrates its merit.
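The serial (concatenation-based) fusion step above can be sketched as follows. The feature arrays are random stand-ins for the fine-tuned VGG19/AlexNet features, the GA-based selection step is elided, and a nearest-centroid rule stands in for the cubic SVM so the sketch needs only NumPy; none of this is the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for deep features of 6 frames: VGG19-like (8-D) and AlexNet-like
# (5-D). Real features would come from the fine-tuned CNNs, with GA-based
# selection keeping a subset of columns before fusion.
vgg_feats = rng.normal(size=(6, 8))
alex_feats = rng.normal(size=(6, 5))
labels = np.array([0, 0, 0, 1, 1, 1])      # e.g. 0 = healthy, 1 = ulcer

# Serial fusion: concatenate along the feature axis, one 13-D vector per frame.
fused = np.concatenate([vgg_feats, alex_feats], axis=1)

# Nearest-centroid classifier as a dependency-free proxy for the SVM:
# assign each sample to the class whose mean feature vector is closest.
centroids = np.stack([fused[labels == c].mean(axis=0) for c in (0, 1)])

def predict(x: np.ndarray) -> int:
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x) for x in fused])
```

In the paper's setting the fused vectors would instead be fed to an SVM with a cubic kernel, but the fusion itself is the same concatenation.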
Affiliation(s)
- M Shahbaz Ayyaz, Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Muhammad Ikram Ullah Lali, Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
- Mubbashar Hussain, Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Hafiz Tayyab Rauf (correspondence), Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Bader Alouffi, Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Hashem Alyami, Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Shahbaz Wasti, Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
7. Amiri Z, Hassanpour H, Beghdadi A. A Computer-Aided Method for Digestive System Abnormality Detection in WCE Images. J Healthc Eng 2021; 2021:7863113. [PMID: 34707798] [PMCID: PMC8545542] [DOI: 10.1155/2021/7863113]
Abstract
Wireless capsule endoscopy (WCE) is a powerful tool for the diagnosis of gastrointestinal diseases. The output of this tool is a video about eight hours long, containing about 8000 frames, and reviewing all of the video frames is a difficult task for a physician. In this paper, a new abnormality detection system for WCE images is proposed. The proposed system has four main steps: (1) preprocessing, (2) region of interest (ROI) extraction, (3) feature extraction, and (4) classification. In ROI extraction, distinct areas are first highlighted and nondistinct areas are faded by using the joint normal distribution; distinct areas are then extracted as an ROI segment by applying a threshold. The main idea is to extract abnormal areas in each frame, so the method can be used to extract various lesions in WCE images. In the feature extraction step, three different types of features (color, texture, and shape) are employed. Finally, the features are classified using a support vector machine. The proposed system was tested on the Kvasir-Capsule dataset and can detect multiple lesions in WCE frames with high accuracy.
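One plausible reading of the ROI step above, which highlights distinct areas using a joint normal distribution and a threshold, is to score each pixel's Mahalanobis distance from the frame's overall colour distribution and keep the outliers. This NumPy sketch is an interpretation under that assumption, not the authors' exact formulation:

```python
import numpy as np

def roi_mask(img: np.ndarray, thresh: float = 4.0) -> np.ndarray:
    """Flag 'distinct' pixels: fit a joint (multivariate) normal to the
    frame's colours and keep pixels whose Mahalanobis distance from the
    mean exceeds a threshold."""
    h, w, c = img.shape
    x = img.reshape(-1, c).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(c)   # regularised covariance
    inv = np.linalg.inv(cov)
    d = np.sqrt(np.einsum("ij,jk,ik->i", x - mu, inv, x - mu))
    return (d > thresh).reshape(h, w)

# Synthetic frame: uniform mucosa-like background plus a small reddish lesion.
img = np.full((32, 32, 3), 120.0)
img += np.random.default_rng(1).normal(0, 2, img.shape)   # sensor noise
img[10:14, 10:14] = [230.0, 80.0, 80.0]                   # lesion patch
mask = roi_mask(img)   # lesion pixels fall far outside the fitted normal
```

Colour, texture, and shape features would then be computed on the flagged region before SVM classification.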
Affiliation(s)
- Zahra Amiri, Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Hamid Hassanpour, Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Azeddine Beghdadi, Department of Computer Science and Engineering, University Sorbonne Paris Nord, Villetaneuse, France
8. Guo X, Zhang L, Hao Y, Zhang L, Liu Z, Liu J. Multiple abnormality classification in wireless capsule endoscopy images based on EfficientNet using attention mechanism. Rev Sci Instrum 2021; 92:094102. [PMID: 34598534] [DOI: 10.1063/5.0054161]
Abstract
The wireless capsule endoscopy (WCE) procedure produces tens of thousands of images of the digestive tract, making the manual reading process highly challenging. Convolutional neural networks are used to automatically detect lesions in WCE images. However, studies on clinical multilesion detection are scarce, and it is difficult to effectively balance the sensitivity to multiple lesions. A strategy for detecting multiple lesions is proposed, wherein common vascular and inflammatory lesions can be automatically and quickly detected in capsule endoscopic images. Based on weakly supervised learning, EfficientNet is fine-tuned to extract the endoscopic image features. Combining spatial and channel features, the proposed attention network is then used as a classifier to obtain three classifications. The accuracy and speed of the model were compared with those of the ResNet121 and InceptionNetV4 models on a public WCE image dataset obtained from 4143 subjects. On the computer-assisted diagnosis for capsule endoscopy database, the method gives a sensitivity of 96.67% for vascular lesions and 93.33% for inflammatory lesions. The precision was 92.80% for vascular lesions and 95.73% for inflammatory lesions. The accuracy was 96.11%, which is 1.11% higher than that of the recent InceptionNetV4 network. Prediction for an image requires only 14 ms, offering a good balance between accuracy and speed. This strategy can be used as an auxiliary diagnostic method to help specialists read clinical capsule endoscopy images rapidly.
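Per-lesion sensitivity and precision figures like those above come from a multi-class confusion matrix. A small sketch with made-up counts (not the paper's test set) shows the computation:

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Per-class sensitivity (recall) and precision from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    tp = np.diag(cm).astype(float)
    sensitivity = tp / cm.sum(axis=1)   # TP / (TP + FN), per true class
    precision = tp / cm.sum(axis=0)     # TP / (TP + FP), per predicted class
    accuracy = tp.sum() / cm.sum()
    return sensitivity, precision, accuracy

# Hypothetical 3-class counts (normal, vascular, inflammatory) -- invented
# numbers purely to illustrate the arithmetic.
cm = np.array([[90,  5,  5],
               [ 2, 58,  0],
               [ 3,  1, 56]])
sens, prec, acc = per_class_metrics(cm)
# sens[1] = 58/60 ≈ 0.967: of 60 true vascular frames, 58 were caught.
```

Balancing sensitivity across lesion types, as the paper discusses, means keeping all entries of `sens` high simultaneously rather than maximising `acc` alone.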
Affiliation(s)
- Xudong Guo, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Lulu Zhang, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Youguo Hao, Department of Rehabilitation, Shanghai Putuo People's Hospital, Shanghai 200060, China
- Linqi Zhang, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhang Liu, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Jiannan Liu, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
9.

10. Tziortziotis I, Laskaratos FM, Coda S. Role of Artificial Intelligence in Video Capsule Endoscopy. Diagnostics (Basel) 2021; 11:1192. [PMID: 34209029] [PMCID: PMC8303156] [DOI: 10.3390/diagnostics11071192]
Abstract
Capsule endoscopy (CE) has been increasingly utilised in recent years as a minimally invasive tool to investigate the whole gastrointestinal (GI) tract, and a range of capsules are currently available for evaluation of upper GI, small bowel, and lower GI pathology. Although CE is undoubtedly an invaluable test for the investigation of small bowel pathology, it presents considerable challenges and limitations, such as long and laborious reading times, the risk of missing lesions, the lack of a bowel cleansing score, and the lack of locomotion. Artificial intelligence (AI) seems to be a promising tool that may help improve the performance metrics of CE and consequently translate to better patient care. In the last decade, significant progress has been made in applying AI in the field of endoscopy, including CE. Although it is certain that AI will soon find its place in day-to-day endoscopy clinical practice, there are still some open questions and barriers limiting its widespread application. In this review, we provide some general information about AI and outline recent advances in AI and CE, issues around the implementation of AI in medical practice, and potential future applications of AI-aided CE.
Affiliation(s)
- Ioannis Tziortziotis, Endoscopy Unit, Digestive Diseases Centre, Queen's Hospital, Barking Havering and Redbridge University Hospitals NHS Trust, Rom Valley Way, Romford, London RM7 0AG, UK
- Faidon-Marios Laskaratos, Endoscopy Unit, Digestive Diseases Centre, Queen's Hospital, Barking Havering and Redbridge University Hospitals NHS Trust, Rom Valley Way, Romford, London RM7 0AG, UK
- Sergio Coda, Endoscopy Unit, Digestive Diseases Centre, Queen's Hospital, Barking Havering and Redbridge University Hospitals NHS Trust, Rom Valley Way, Romford, London RM7 0AG, UK; Photonics Group, Department of Physics, Imperial College London, Exhibition Rd, South Kensington, London SW7 2BX, UK
11. Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355] [DOI: 10.2174/1573405616666200928144626]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT) that is not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is not feasible for an expert physician, because analyzing thousands of images thoroughly is a time-consuming task. Hence arises the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze images, and many researchers have proposed techniques for automated recognition and classification of abnormality in captured images. METHODS In this article, existing methods for automated classification, segmentation, and detection of several GI diseases are discussed, and comprehensive detail about these state-of-the-art methods is given. The literature is divided into several subsections covering preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review article gathers information related to a number of GI disease diagnosis methods in one place and will facilitate researchers in developing new algorithms and approaches for early detection of GI diseases, with more promising results than the existing ones in the literature.
Affiliation(s)
- Javeria Naz, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
12. Kumar R, Khan FU, Sharma A, Aziz IB, Poddar NK. Recent Applications of Artificial Intelligence in detection of Gastrointestinal, Hepatic and Pancreatic Diseases. Curr Med Chem 2021; 29:66-85. [PMID: 33820515] [DOI: 10.2174/0929867328666210405114938]
Abstract
There has been substantial progress in artificial intelligence (AI) algorithms and their applications in the medical sciences over the last two decades. AI-assisted programs have already been established for remote health monitoring using sensors and smartphones. A variety of AI-based prediction models are available for gastrointestinal inflammatory and non-malignant diseases and bowel bleeding using wireless capsule endoscopy, for hepatitis-associated fibrosis using electronic medical records, and for pancreatic carcinoma using endoscopic ultrasound. AI-based models may be of immense help to healthcare professionals in identification, analysis, and decision support, using endoscopic images to establish prognosis and assess patient risk from multiple factors, although sufficient randomized clinical trials are warranted to establish the efficacy of AI-assisted and non-AI-based treatments before such techniques are approved by medical regulatory authorities. In this article, available AI approaches and AI-based prediction models for detecting gastrointestinal, hepatic, and pancreatic diseases are reviewed, and the limitations of AI techniques in disease prognosis, risk assessment, and decision support are discussed.
Affiliation(s)
- Rajnish Kumar, Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
- Farhat Ullah Khan, Computer and Information Sciences Department, Universiti Teknologi Petronas, 32610, Seri Iskander, Perak, Malaysia
- Anju Sharma, Department of Applied Science, Indian Institute of Information Technology, Allahabad, Uttar Pradesh, India
- Izzatdin Ba Aziz, Computer and Information Sciences Department, Universiti Teknologi Petronas, 32610, Seri Iskander, Perak, Malaysia
13. Real-time automatic polyp detection in colonoscopy using feature enhancement module and spatiotemporal similarity correlation unit. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102503]
14. Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021; 25:514-525. [PMID: 32750912] [DOI: 10.1109/jbhi.2020.2997760]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
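The fusion of the global and local subnetwork outputs described above amounts to resampling feature maps to a common resolution and concatenating them along the channel axis. A toy NumPy sketch with invented shapes (the real network learns these maps and fuses them inside the architecture):

```python
import numpy as np

def upsample_nn(feat: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

rng = np.random.default_rng(0)
# A coarse, semantic "global" map at 1/4 resolution and a fine "local" map
# at full resolution; channel counts and sizes are made up for illustration.
global_feat = rng.normal(size=(8, 4, 4))     # C=8 on a 4x4 grid
local_feat = rng.normal(size=(4, 16, 16))    # C=4 on a 16x16 grid

# Bring the coarse map to the fine resolution, then fuse along channels so a
# segmentation head sees both global context and local spatial detail.
fused = np.concatenate([upsample_nn(global_feat, 4), local_feat], axis=0)
```

Preserving the full-resolution local branch is what counters the loss of spatial detail from pooling and strided convolutions that the abstract highlights.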
15. Liaqat A, Khan MA, Sharif M, Mittal M, Saba T, Manic KS, Al Attar FNH. Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review. Curr Med Imaging 2021; 16:1229-1242. [PMID: 32334504] [DOI: 10.2174/1573405616666200425220513]
Abstract
Recent facts and figures published in various US studies show that approximately 27,510 new cases of gastric infections are diagnosed, and the mortality rate among diagnosed cases is quite high. Early detection of these infections can save precious human lives. As manual diagnosis of these infections is time-consuming and expensive, automated Computer-Aided Diagnosis (CAD) systems are required to help endoscopy specialists in their clinics. Generally, an automated method for gastric infection detection using Wireless Capsule Endoscopy (WCE) comprises steps such as contrast preprocessing, feature extraction, segmentation of infected regions, and classification into the relevant categories. These steps involve various challenges that reduce detection and recognition accuracy and increase computation time. In this review, the authors focus on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. The general steps are presented, with the importance of each step highlighted. A detailed discussion and future directions are provided at the end.
Affiliation(s)
- Amna Liaqat, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mamta Mittal, Department of Computer Science & Engineering, G.B. Pant Govt. Engineering College, New Delhi, India
- Tanzila Saba, Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- K. Suresh Manic, Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman
|
16
|
Use of artificial intelligence for detection of gastric lesions by magnetically controlled capsule endoscopy. Gastrointest Endosc 2021; 93:133-139.e4. [PMID: 32470426 DOI: 10.1016/j.gie.2020.05.027] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 05/01/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Magnetically controlled capsule endoscopy (MCE) has become an efficient diagnostic modality for gastric diseases. We developed a novel automatic gastric lesion detection system to assist in diagnosis and reduce inter-physician variation. This study aimed to evaluate the diagnostic capability of the computer-aided detection system for MCE images. METHODS We developed a novel automatic gastric lesion detection system based on a convolutional neural network (CNN) and a faster region-based convolutional neural network (faster RCNN). A total of 1,023,955 MCE images from 797 patients were used to train and test the system. These images were divided into 7 categories (erosion, polyp, ulcer, submucosal tumor, xanthoma, normal mucosa, and invalid images). The primary endpoint was the sensitivity of the system. RESULTS The system detected gastric focal lesions with 96.2% sensitivity (95% confidence interval [CI], 95.7%-96.5%), 76.2% specificity (95% CI, 75.97%-76.3%), 16.0% positive predictive value (95% CI, 15.7%-16.3%), 99.7% negative predictive value (95% CI, 99.74%-99.79%), and 77.1% accuracy (95% CI, 76.9%-77.3%); sensitivity was 99.3% for erosions, 96.5% for polyps, 89.3% for ulcers, 87.2% for submucosal tumors, 90.6% for xanthomas, 67.8% for normal mucosa, and 96.1% for invalid images. Analysis of the receiver operating characteristic curve showed that the area under the curve for all positive images was 0.84. Image processing time was 44 milliseconds per image for the system and 0.38 ± 0.29 seconds per image for clinicians (P < .001). The kappa value of 2 repeated reads was 1. CONCLUSIONS The CNN and faster RCNN-based diagnostic system showed good performance in diagnosing gastric focal lesions in MCE images.
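The metric pattern reported above (very high sensitivity and NPV, modest PPV) is characteristic of screening on heavily imbalanced data, where lesions are rare among the flagged frames. A small sketch with made-up counts chosen only to roughly mimic that pattern, not the study's data:

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Basic screening metrics from binary confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),            # lesion frames caught
        "specificity": tn / (tn + fp),            # normal frames passed
        "ppv": tp / (tp + fp),                    # flagged frames truly abnormal
        "npv": tn / (tn + fn),                    # cleared frames truly normal
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: few true lesions, many false alarms among flags.
m = screening_metrics(tp=960, fp=5040, fn=40, tn=16_000)
# sensitivity = 0.96 and ppv = 0.16, yet npv stays above 0.99: a reader can
# trust cleared frames while flagged frames still need human review.
```

This is why a low PPV, as in the study, can be acceptable for a triage tool whose goal is not to miss lesions.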
17. Meher D, Gogoi M, Bharali P, Anirvan P, Singh SP. Artificial Intelligence in Small Bowel Endoscopy: Current Perspectives and Future Directions. J Dig Endosc 2020. [DOI: 10.1055/s-0040-1717824]
Abstract
Artificial intelligence (AI) is a computer system able to perform tasks that normally require human intelligence. The role of AI in the field of gastroenterology has been gradually evolving since its inception in the 1950s. The discovery of wireless capsule endoscopy (WCE) and balloon enteroscopy (BE) has revolutionized small gut imaging. While WCE is a relatively patient-friendly and noninvasive mode to examine the nonobstructed small gut, it is limited by a lengthy examination time and the need for expertise in reading the images acquired by the capsule. Similarly, BE, despite having the advantage of therapeutic intervention, is costly, invasive, and requires general sedation. Concepts like machine learning and deep learning have been used to handle large amounts of data and images in gastroenterology; interestingly, in small gut imaging, the application of AI has been limited to WCE. This review was planned to examine and summarize available published data on various AI-based approaches applied to small bowel disease.
We conducted an extensive literature search using the Google search engine, Google Scholar, and the PubMed database for literature published in English on the application of different AI techniques in small bowel endoscopy, and have summarized the outcomes and benefits of these applications. Incorporation of AI in WCE has resulted in significant advancements in the detection of various lesions, from dysplastic mucosa and inflammatory and nonmalignant lesions to bleeding, with increasing accuracy, and has shortened the lengthy review time of image analysis. As most studies evaluating AI are retrospective, inherent selection bias cannot be excluded. Besides, the interpretability (black-box nature) of AI models remains a cause for concern. Finally, issues related to medical ethics and AI need to be judiciously addressed to enable its seamless use in the future.
Affiliation(s)
- Dinesh Meher
- Department of Gastroenterology, S.C.B. Medical College, Cuttack, Odisha, India
- Mrinal Gogoi
- Department of Gastroenterology, S.C.B. Medical College, Cuttack, Odisha, India
- Pankaj Bharali
- Department of Gastroenterology, S.C.B. Medical College, Cuttack, Odisha, India
- Prajna Anirvan
- Department of Gastroenterology, S.C.B. Medical College, Cuttack, Odisha, India
18
Jani KK, Srivastava R. A Survey on Medical Image Analysis in Capsule Endoscopy. Curr Med Imaging 2020; 15:622-636. [PMID: 32008510 DOI: 10.2174/1573405614666181102152434] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2018] [Revised: 10/14/2018] [Accepted: 10/22/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Capsule Endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6- to 8-hour-long video, posing a tedious challenge to a gastroenterologist for abnormality detection. The major challenges for an expert are lengthy videos, the need for constant concentration, and the subjectivity of the abnormality. To address these challenges while achieving high diagnostic accuracy, the design and development of an automated abnormality detection system is essential. Machine learning and computer vision techniques are devised to develop such automated systems. METHODS The study presents a review of quality research papers published in the IEEE, Scopus, and Science Direct databases, with the search criteria of capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications. After evaluating all articles, 62 publications pertaining to image analysis were selected. RESULTS This paper presents a rigorous review comprising all aspects of medical image analysis concerning capsule endoscopy, namely video summarization and redundant image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection, and image and video compression. The study provides a comparative analysis of the approaches, experimental setups, performance, strengths, and limitations of the aspects stated above. CONCLUSIONS The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
Affiliation(s)
- Kuntesh Ketan Jani
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
- Rajeev Srivastava
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
19
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 07/13/2020] [Accepted: 07/18/2020] [Indexed: 12/12/2022]
20
Pham VT, Lin C, Tran TT, M Su MY, Lin YK, Nien CT, I Tseng WY, Lin JL, Lo MT, Lin LY. Predicting ventricular tachyarrhythmia in patients with systolic heart failure based on texture features of the gray zone from contrast-enhanced magnetic resonance imaging. J Cardiol 2020; 76:601-609. [PMID: 32675026 DOI: 10.1016/j.jjcc.2020.06.020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Revised: 06/11/2020] [Accepted: 06/15/2020] [Indexed: 12/22/2022]
Abstract
BACKGROUND Previous research showed that the gray zone detected by late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) imaging could help identify high-risk patients. In this study, we investigated whether LGE-CMR gray zone heterogeneity measured by image texture features could predict cardiovascular events in patients with heart failure (HF). METHODS This is a retrospective cohort study. Patients with systolic HF undergoing CMR imaging were enrolled. Cine and LGE images were analyzed to derive left ventricular (LV) function and scar characteristics. Entropy and uniformity of gray zones were derived by texture analysis. RESULTS A total of 82 systolic HF patients were enrolled. After a median 1021 (25%-75% quartiles, 205-2066) days of follow-up, entropy was significantly increased (0.60 ± 0.26 vs. 0.87 ± 0.28, p = 0.013) and uniformity significantly decreased (0.68 ± 0.14 vs. 0.53 ± 0.15, p = 0.016) in patients with ventricular tachycardia or ventricular fibrillation (VT/VF). The percentage of core scar (21.9 ± 10.6 vs. 30.6 ± 10.4, p = 0.029) was higher, and the uniformity (0.55 ± 0.17 vs. 0.67 ± 0.14, p = 0.018) lower, in the cardiac mortality group than in the survival group. A multivariate Cox regression model showed that a higher percentage of gray zone area (HR = 8.805, 1.620-47.84, p = 0.045), higher entropy (>0.85) (HR = 1.391, 1.092-1.772, p = 0.024), and lower uniformity (≤0.54) (HR = 0.535, 0.340-0.842, p = 0.022) were associated with VT/VF attacks. Likewise, a higher percentage of gray zone area (HR = 5.716, 1.379-23.68, p = 0.017), core scar zone (HR = 1.939, 1.056-3.561, p = 0.025), and entropy (>0.85) (HR = 1.434, 1.076-1.911, p = 0.008), and lower uniformity (≤0.54) (HR = 0.513, 0.296-0.888, p = 0.009), were associated with cardiac mortality during follow-up.
CONCLUSIONS Gray zone heterogeneity assessed by texture analysis could provide prognostic value additional to the traditional LGE-CMR substrate analysis.
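Entropy and uniformity, the two texture features used above, are computed from the normalized gray-level histogram of the region of interest (here, the LGE-CMR gray zone). A minimal pure-Python sketch; the gray-level quantization and the sample pixel values are illustrative assumptions, not the study's exact pipeline.

```python
import math
from collections import Counter

def texture_entropy_uniformity(pixels):
    """Return (entropy, uniformity) of a list of quantized gray levels."""
    counts = Counter(pixels)
    n = len(pixels)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)   # higher = more heterogeneous
    uniformity = sum(p * p for p in probs)            # higher = more homogeneous
    return entropy, uniformity

# A perfectly homogeneous region has entropy 0 and uniformity 1;
# a region spread evenly over many gray levels trends the other way.
e, u = texture_entropy_uniformity([7] * 100)
```

These two statistics move in opposite directions, which matches the reported finding that entropy rose and uniformity fell in the VT/VF group.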
Affiliation(s)
- Van-Truong Pham
- School of Electrical Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam; Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Chen Lin
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan; Department of Internal Medicine, National Taiwan University College of Medicine and Hospital, Taipei, Taiwan
- Thi-Thao Tran
- School of Electrical Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam; Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Mao-Yuan M Su
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan
- Ying-Kuang Lin
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan; Department of Medicine, Taiwan Landseed Hospital, Taoyuan, Taiwan
- Chun-Tung Nien
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan; Department of Medicine, Taiwan Landseed Hospital, Taoyuan, Taiwan
- Wen-Yih I Tseng
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Center for Optoelectronic Biomedicine, National Taiwan University College of Medicine, Taipei, Taiwan
- Jiunn-Lee Lin
- Department of Internal Medicine, National Taiwan University College of Medicine and Hospital, Taipei, Taiwan
- Men-Tzung Lo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Lian-Yu Lin
- Department of Internal Medicine, National Taiwan University College of Medicine and Hospital, Taipei, Taiwan
21
de Groen PC. Using artificial intelligence to improve adequacy of inspection in gastrointestinal endoscopy. ACTA ACUST UNITED AC 2020. [DOI: 10.1016/j.tgie.2019.150640] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
22
Kundu AK, Fattah SA, Wahid KA. Multiple Linear Discriminant Models for Extracting Salient Characteristic Patterns in Capsule Endoscopy Images for Multi-Disease Detection. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2020; 8:3300111. [PMID: 32190429 PMCID: PMC7062148 DOI: 10.1109/jtehm.2020.2964666] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Revised: 11/05/2019] [Accepted: 12/03/2019] [Indexed: 01/01/2023]
Abstract
Background: Computer-aided disease detection schemes for wireless capsule endoscopy (WCE) videos have received great attention from researchers as a means of reducing physicians' burden from the time-consuming and risky manual review process. While single-disease classification schemes have been studied extensively in the past, developing a unified scheme capable of detecting multiple gastrointestinal (GI) diseases is very challenging due to the highly irregular behavior of diseased images in terms of color patterns. Method: In this paper, a computer-aided method is developed to detect multiple GI diseases from WCE videos, utilizing a linear discriminant analysis (LDA) based region of interest (ROI) separation scheme followed by a probabilistic model fitting approach. Commonly, because pixel-labeled images are available only in small numbers, only the image-level annotations are used in the training phase for detecting diseases in WCE images, whereas pixel-level knowledge, although a major source for learning the disease characteristics, is left unused. To learn the characteristic disease patterns from pixel-labeled images, a set of LDA models is trained and later used to extract the salient ROI from WCE images in both the training and testing stages. The intensity patterns of the ROI are then modeled by a suitable probability distribution, and the fitted parameters of the distribution are utilized as features in a supervised cascaded classification scheme. Results: To validate the proposed multi-disease detection scheme, a set of pixel-labeled images of bleeding, ulcer, and tumor is used to extract the LDA models; a large WCE dataset is then used for training and testing. A high level of accuracy is achieved even with a small number of pixel-labeled images. Conclusion: The proposed scheme is therefore expected to help physicians review large numbers of WCE images to diagnose different GI diseases.
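Fisher's two-class linear discriminant is the core operation behind the LDA-based ROI separation described above. A minimal pure-Python sketch for 2-D features (e.g. two color components per pixel); the per-pixel feature choice and the sample data are illustrative assumptions, not the paper's exact setup.

```python
def fisher_lda_direction(class0, class1):
    """Return the Fisher discriminant direction w = Sw^-1 (m1 - m0) for 2-D samples."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        # Within-class scatter contribution: sum of outer products of centered samples.
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[0][0] + s1[0][0], s0[0][1] + s1[0][1]],
          [s0[1][0] + s1[1][0], s0[1][1] + s1[1][1]]]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]   # assumes Sw is invertible
    inv = [[ sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det,  sw[0][0] / det]]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]
```

Projecting each pixel's feature vector onto `w` and thresholding the result is one way such a model can separate salient (diseased) pixels from background, in the spirit of the ROI extraction above.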
Affiliation(s)
- Amit Kumar Kundu
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Shaikh Anowarul Fattah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Khan A Wahid
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
23
Multiple abnormality detection for automatic medical image diagnosis using bifurcated convolutional neural network. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101792] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
24
Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020; 158:76-94.e2. [PMID: 31593701 DOI: 10.1053/j.gastro.2019.08.058] [Citation(s) in RCA: 285] [Impact Index Per Article: 71.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 08/22/2019] [Accepted: 08/24/2019] [Indexed: 02/07/2023]
Abstract
Since 2010, substantial progress has been made in artificial intelligence (AI) and its application to medicine. AI is explored in gastroenterology for endoscopic analysis of lesions, in detection of cancer, and to facilitate the analysis of inflammatory lesions or gastrointestinal bleeding during wireless capsule endoscopy. AI is also tested to assess liver fibrosis and to differentiate patients with pancreatic cancer from those with pancreatitis. AI might also be used to establish prognoses of patients or predict their response to treatments, based on multiple factors. We review the ways in which AI may help physicians make a diagnosis or establish a prognosis and discuss its limitations, knowing that further randomized controlled studies will be required before the approval of AI techniques by the health authorities.
Affiliation(s)
- Catherine Le Berre
- Institut des Maladies de l'Appareil Digestif, Nantes University Hospital, France; Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
- Sabeur Aridhi
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Marie-Dominique Devignes
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Laure Fournier
- Université Paris-Descartes, Institut National de la Santé et de la Recherche Médicale, Unité Mixte de Recherche S970, Assistance Publique-Hôpitaux de Paris, Paris, France
- Malika Smaïl-Tabbone
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Silvio Danese
- Inflammatory Bowel Disease Center and Department of Biomedical Sciences, Humanitas Clinical and Research Center, Humanitas University, Milan, Italy
- Laurent Peyrin-Biroulet
- Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
25
The future of capsule endoscopy in clinical practice: from diagnostic to therapeutic experimental prototype capsules. GASTROENTEROLOGY REVIEW 2019; 15:179-193. [PMID: 33005262 PMCID: PMC7509905 DOI: 10.5114/pg.2019.87528] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 06/17/2019] [Indexed: 02/08/2023]
Abstract
Capsule endoscopy (CE) is indicated as a first-line clinical examination for the detection of small-bowel pathology, and there is an ever-growing drive for it to become a method for screening the entire gastrointestinal (GI) tract. Although CE's main function is diagnosis, research into therapeutic capabilities has intensified to make therapeutic capsule endoscopy (TCE) a target within reach. This manuscript presents the research evolution of CE and TCE through the last 5 years and describes notable problems, as well as clinical and technological challenges to overcome. This review also reports the state of the art of capsule devices, with a focus on CE research prototypes promising an enhanced diagnostic yield (DY) and treatment. Lastly, this article provides an overview of the research progress made in software for enhancing DY by increasing the accuracy of abnormality detection and lesion localisation.
26
Ali H, Sharif M, Yasmin M, Rehmani MH, Riaz F. A survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09743-2] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
27
Jani KK, Srivastava S, Srivastava R. Computer aided diagnosis system for ulcer detection in capsule endoscopy using optimized feature set. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-182883] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Kuntesh K. Jani
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
- Subodh Srivastava
- Department of Electronics and Communication Engineering, National Institute of Technology, Patna, Bihar, India
- Rajeev Srivastava
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
28
Alaskar H, Hussain A, Al-Aseem N, Liatsis P, Al-Jumeily D. Application of Convolutional Neural Networks for Automated Ulcer Detection in Wireless Capsule Endoscopy Images. SENSORS (BASEL, SWITZERLAND) 2019; 19:E1265. [PMID: 30871162 PMCID: PMC6471286 DOI: 10.3390/s19061265] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 02/24/2019] [Accepted: 03/08/2019] [Indexed: 12/17/2022]
Abstract
Detection of abnormalities in wireless capsule endoscopy (WCE) images is a challenging task. Typically, these images suffer from low contrast, complex backgrounds, and variations in lesion shape and color, which affect the accuracy of their segmentation and subsequent classification. This research proposes an automated system for the detection and classification of ulcers in WCE images, based on state-of-the-art deep learning networks. Deep learning techniques, and in particular convolutional neural networks (CNNs), have recently become popular in the analysis and recognition of medical images. The medical image datasets used in this study were obtained from WCE video frames. In this work, two milestone CNN architectures, namely AlexNet and GoogLeNet, are extensively evaluated in classifying objects as ulcer or non-ulcer. Furthermore, we examine and analyze the images identified as containing ulcer objects to evaluate the efficiency of the utilized CNNs. Extensive experiments show that CNNs deliver superior performance, surpassing traditional machine learning methods by large margins, which supports their effectiveness as automated diagnosis tools.
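Convolution followed by a nonlinearity is the elementary building block of the CNNs (AlexNet, GoogLeNet) evaluated above. A minimal pure-Python sketch of a single 'valid' 2-D convolution with ReLU; the vertical-edge kernel and the toy image are illustrative, not learned weights from either network.

```python
def conv2d_relu(image, kernel):
    """'Valid' 2-D cross-correlation of a 2-D list `image` with `kernel`, then ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            acc = sum(image[i + u][j + v] * kernel[u][v]
                      for u in range(kh) for v in range(kw))
            row.append(max(0.0, acc))   # ReLU keeps only positive responses
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity jumps from left to right,
# the kind of low-level feature early CNN layers learn automatically.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
response = conv2d_relu(image, kernel)
```

In a trained CNN, many such kernels are learned from data and stacked in layers; this sketch only shows the arithmetic of one filter application.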
Affiliation(s)
- Haya Alaskar
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
- Abir Hussain
- Department of Computer Science, Liverpool John Moores University, Liverpool L3 3AF, UK
- Nourah Al-Aseem
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
- Panos Liatsis
- Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi 127788, UAE
- Dhiya Al-Jumeily
- Department of Computer Science, Liverpool John Moores University, Liverpool L3 3AF, UK
29
Vasilakakis M, Koulaouzidis A, Yung DE, Plevris JN, Toth E, Iakovidis DK. Follow-up on: optimizing lesion detection in small bowel capsule endoscopy and beyond: from present problems to future solutions. Expert Rev Gastroenterol Hepatol 2019; 13:129-141. [PMID: 30791780 DOI: 10.1080/17474124.2019.1553616] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
This review presents noteworthy advances in clinical and experimental Capsule Endoscopy (CE), focusing on the progress that has been reported over the last 5 years since our previous review on the subject. Areas covered: This study presents the commercially available CE platforms, as well as the advances made in optimizing the diagnostic capabilities of CE. The latter includes recent concept and prototype capsule endoscopes, medical approaches to improve diagnostic yield, and progress in software for enhancing visualization, abnormality detection, and lesion localization. Expert commentary: Currently, moving through the second decade of CE evolution, there are still several open issues and remarkable challenges to overcome.
Affiliation(s)
- Michael Vasilakakis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Anastasios Koulaouzidis
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland; Department of Clinical Sciences, Lund University, Malmö, Sweden
- Diana E Yung
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- John N Plevris
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- Ervin Toth
- Department of Clinical Sciences, Lund University, Malmö, Sweden; Section of Gastroenterology, Department of Clinical Sciences, Skåne University Hospital Malmö, Malmö, Sweden
- Dimitris K Iakovidis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
30
Polepaka S, Rao CS, Chandra Mohan M. IDSS-based Two stage classification of brain tumor using SVM. HEALTH AND TECHNOLOGY 2019. [DOI: 10.1007/s12553-018-00290-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
31
Zhang R, Zheng Y, Poon CC, Shen D, Lau JY. Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker. PATTERN RECOGNITION 2018; 83:209-219. [PMID: 31105338 PMCID: PMC6519928 DOI: 10.1016/j.patcog.2018.05.026] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
A computer-aided detection (CAD) tool for locating and detecting polyps can help reduce the chance of missing polyps during colonoscopy. Nevertheless, state-of-the-art algorithms are either computationally complex or suffer from low sensitivity, and are therefore unsuitable for use in a real clinical setting. In this paper, a novel regression-based Convolutional Neural Network (CNN) pipeline is presented for polyp detection during colonoscopy. The proposed pipeline was constructed in two parts: 1) to learn the spatial features of colorectal polyps, a fast object detection algorithm named ResYOLO was pre-trained on a large non-medical image database and further fine-tuned with colonoscopic images extracted from videos; and 2) temporal information was incorporated via a tracker named Efficient Convolution Operators (ECO) to refine the detection results given by ResYOLO. Evaluated on 17,574 frames extracted from 18 endoscopic videos of the AsuMayoDB, the proposed method was able to detect frames with polyps with a precision of 88.6%, a recall of 71.6%, and a processing speed of 6.5 frames per second, i.e., the method can accurately locate polyps in more frames and at a faster speed than existing methods. In conclusion, the proposed method has great potential for assisting endoscopists in tracking polyps during colonoscopy.
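The precision and recall figures above count a detection as correct when it sufficiently overlaps a ground-truth polyp box. A minimal sketch using intersection-over-union (IoU) matching; the boxes, the greedy one-to-one matching, and the 0.5 threshold are illustrative assumptions, not necessarily the paper's exact evaluation protocol.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(detections, truths, thr=0.5):
    """Greedily match detections to ground-truth boxes, then score."""
    unmatched = list(truths)
    tp = 0
    for d in detections:
        hit = next((t for t in unmatched if iou(d, t) >= thr), None)
        if hit is not None:
            unmatched.remove(hit)   # each truth box may match at most once
            tp += 1
    fp = len(detections) - tp
    fn = len(truths) - tp
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    return precision, recall

# Example: one true positive plus one spurious detection far from the polyp.
p, r = precision_recall([(0, 0, 2, 2), (10, 10, 12, 12)], [(0, 0, 2, 2)])
```

The trade-off reported above (high precision, lower recall) corresponds to a detector tuned to emit few false positives at the cost of some missed frames.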
Affiliation(s)
- Ruikai Zhang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Yali Zheng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Carmen C.Y. Poon
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong (corresponding author)
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea (corresponding author)
- James Y.W. Lau
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
32
Liu D, Rao N, Mei X, Jiang H, Li Q, Luo C, Li Q, Zeng C, Zeng B, Gan T. Annotating Early Esophageal Cancers Based on Two Saliency Levels of Gastroscopic Images. J Med Syst 2018; 42:237. [PMID: 30327890 DOI: 10.1007/s10916-018-1063-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Accepted: 09/06/2018] [Indexed: 02/05/2023]
Abstract
Early diagnosis of esophageal cancer can greatly improve the survival rate of patients. At present, the lesion annotation of early esophageal cancers (EEC) in gastroscopic images is generally performed manually by medical personnel in the clinic. To reduce the effects of subjectivity and fatigue in manual annotation, computer-aided annotation is required. However, automated annotation of EEC lesions in images is a challenging task owing to the fine-grained variability in the appearance of EEC lesions. This study modifies the traditional EEC annotation framework and utilizes visually salient information to develop a two-saliency-level-based lesion annotation (TSL-BLA) method for EEC annotation on gastroscopic images. Unlike existing methods, the proposed framework has a strong ability to constrain false positive outputs. Moreover, TSL-BLA places additional emphasis on the annotation of small EEC lesions. A total of 871 gastroscopic images from 231 patients were used to validate TSL-BLA: 365 of those images contain 434 EEC lesions, and 506 images do not contain any lesions. 101 small lesion regions were extracted from the 434 lesions to further validate the performance of TSL-BLA. The experimental results show that the mean detection rate and Dice similarity coefficient of TSL-BLA were 97.24% and 75.15%, respectively. Compared with other state-of-the-art methods, TSL-BLA shows better performance, and it shows strong superiority when annotating small EEC lesions. It also produces fewer false positive outputs and has a fast running speed. Therefore, the proposed method has good application prospects for aiding clinical EEC diagnoses.
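The Dice similarity coefficient reported above measures the overlap between the predicted lesion annotation and the ground truth. A minimal pure-Python sketch on masks represented as sets of pixel coordinates; the toy masks are illustrative.

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two sets of pixel coordinates."""
    if not mask_a and not mask_b:
        return 1.0   # convention: two empty masks agree perfectly
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # predicted lesion pixels
truth = {(0, 1), (1, 1), (2, 1), (2, 0)}   # ground-truth lesion pixels
score = dice_coefficient(pred, truth)      # 2*2 / (4+4) = 0.5
```

A DSC of 1 means pixel-perfect agreement and 0 means no overlap, so the 75.15% mean reported above indicates substantial but imperfect boundary agreement.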
Affiliation(s)
- Dingyun Liu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Nini Rao
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Xinming Mei
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China; Institute of Electronic and Information Engineering of UESTC in Guangdong, Dongguan, China
- Hongxiu Jiang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Quanchi Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- ChengSi Luo
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Qian Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Chengshi Zeng
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China; Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- Bing Zeng
- School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Tao Gan
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, China
33
Yuan Y, Yao X, Han J, Guo L, Meng MQH. Discriminative Joint-Feature Topic Model With Dual Constraints for WCE Classification. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:2074-2085. [PMID: 28749365 DOI: 10.1109/tcyb.2017.2726818] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Wireless capsule endoscopy (WCE) enables clinicians to examine the digestive tract without any surgical operations, at the cost of a large number of images to be analyzed. The main challenge for automatic computer-aided diagnosis arises from the difficulty of robustly characterizing these images. To tackle this problem, a novel discriminative joint-feature topic model (DJTM) with dual constraints is proposed to classify multiple abnormalities in WCE images. We first propose a joint-feature probabilistic latent semantic analysis (PLSA) model, where color and texture descriptors extracted from the same image patches are jointly modeled with their conditional distributions. Then the proposed dual constraints, visual word importance and a local image manifold, are embedded into the joint-feature PLSA model simultaneously to obtain discriminative latent semantic topics. The visual word importance is proposed in our DJTM to guarantee that visual words with similar importance come from close latent topics, while the local image manifold constraint enforces that images within the same category share similar latent topics. Finally, each image is characterized by its distribution of latent semantic topics instead of low-level features. Our proposed DJTM achieved an excellent overall recognition accuracy of 90.78%. Comprehensive comparison results demonstrate that our method outperforms existing multiple-abnormality classification methods for WCE images.
34
Ali H, Yasmin M, Sharif M, Rehmani MH. Computer assisted gastric abnormalities detection using hybrid texture descriptors for chromoendoscopy images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 157:39-47. [PMID: 29477434 DOI: 10.1016/j.cmpb.2018.01.013] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2017] [Revised: 11/25/2017] [Accepted: 01/10/2018] [Indexed: 05/11/2023]
Abstract
BACKGROUND AND OBJECTIVE The early diagnosis of stomach cancer can be performed by using a proper screening procedure. Chromoendoscopy (CH) is an image-enhanced video endoscopy technique used for inspection of the gastrointestinal tract by spraying dyes to highlight the gastric mucosal structures. An endoscopy session can end up generating a large number of video frames, so inspection of every individual endoscopic frame is an exhaustive task for medical experts. In contrast with manual inspection, automated analysis of gastroenterology images using computer vision techniques can assist the endoscopist by finding abnormal frames in the whole endoscopic sequence. METHODS In this paper, we present a new feature extraction method named Gabor-based gray-level co-occurrence matrix (G2LCM) for computer-aided detection of abnormal CH frames. It is a hybrid texture extraction approach that combines both local and global texture descriptors. The texture information of a CH image is represented by computing the gray-level co-occurrence matrices of Gabor filter responses, and the second-order statistics of these co-occurrence matrices are computed to represent the image's texture. RESULTS The obtained results show that abnormal frames can be correctly classified from normal ones, with sensitivity, specificity, accuracy, and area under the curve of 91%, 82%, 87%, and 0.91, respectively, using a support vector machine classifier and G2LCM texture features. CONCLUSION It is apparent from the results that the proposed system can be used to aid the gastroenterologist in screening the gastric tract, ultimately reducing the time taken by an endoscopic procedure.
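The second-order GLCM statistics at the heart of G2LCM can be sketched in pure NumPy. This is an illustrative sketch, not the authors' code: in the paper the co-occurrence matrices are computed over Gabor filter responses, and that filter bank is omitted here; `glcm` and `glcm_features` are hypothetical helper names.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalised gray-level co-occurrence matrix for one
    offset (dx, dy). img: 2-D array of integer gray levels in [0, levels)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    m += m.T                     # make the matrix symmetric
    return m / m.sum()

def glcm_features(m):
    """Classic second-order statistics of a normalised GLCM."""
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)
    energy = np.sum(m ** 2)
    homogeneity = np.sum(m / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])
```

The resulting feature vectors (computed per offset and per filter response) would then be concatenated and fed to an SVM, as the abstract describes.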
Affiliation(s)
- Hussam Ali
- COMSATS Institute of Information Technology Wah, Pakistan.
- Mubashir Husain Rehmani
- Telecommunications Software and Systems Group (TSSG) Waterford Institute of Technology (WIT), Ireland
35
Formulation and statistical evaluation of an automated algorithm for locating small bowel tumours in wireless capsule endoscopy. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
36
K G, C R. Heuristic Classifier for Observe Accuracy of Cancer Polyp Using Video Capsule Endoscopy. Asian Pac J Cancer Prev 2017; 18:1681-1688. [PMID: 28670889 PMCID: PMC6373793 DOI: 10.22034/apjcp.2017.18.6.1681] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Methods: Colonoscopy is a technique for examining the colon for cancer and polyps. In endoscopy, the video capsule is a widely used mechanism for examining the stages of the gastrointestinal tract; both mechanisms are used to find colon cancer or colorectal polyps. The Automatic Polyp Detection sub-challenge was conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org). Colonoscopy may be the primary way to improve colon cancer detection, especially for flat lesions, which may otherwise be difficult to detect. Recently, automatic polyp detection algorithms have been proposed with varying degrees of success. Although polyp detection in colonoscopy and other traditional endoscopy images is becoming a mature field, detecting polyps automatically remains a hard problem because of the unique imaging characteristics; the proposed video capsule camera supports accurate diagnosis of polyps and easy identification of their patterns. Existing methodologies mainly concentrate on high accuracy and low time consumption and use many different data mining techniques. To analyze these high-resolution video images, segmentation is performed at the pixel level using a binary pattern computed with a mid-pass filter and the relative gray level of neighbors. This work consists of three major steps to improve the accuracy of video capsule endoscopy: missing data imputation, dimensionality reduction or feature selection, and classification. These steps are performed on an endoscopy polyp disease dataset of 500 patients. The proposed binary classification algorithm relieves human analysis of the video frames, with the SVM as the main contributor in processing the dataset. Results: The proposed approach combines segmentation and a binary pattern representation with a Genetic Fuzzy based Improved Kernel Support Vector Machine (GF-IKSVM) classifier. The segmented images are mostly round in shape. The result is refined via smoothing filters, computer vision methods, and thresholding steps. Conclusion: The experiments produce 94.4% accuracy for the proposed genetic fuzzy system, which is higher than the methods used in the literature. The GF-IKSVM classifier is well organized and provides good accuracy for patch-based VCE polyp disease diagnosis.
Affiliation(s)
- Geetha K
- Department of Information Technology, Excel Engineering College, India.
37
Liu P, Guo JM, Chamnongthai K, Prasetyo H. Fusion of color histogram and LBP-based features for texture image retrieval and classification. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.01.025] [Citation(s) in RCA: 61] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
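A minimal sketch of the fusion idea in this entry, assuming a basic 8-neighbour LBP (not the uniform or rotation-invariant variants often used in practice) and simple per-channel color histograms; the function names are illustrative, not taken from the paper:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP codes over interior pixels, returned as a
    normalised 256-bin histogram."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)   # threshold at centre
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def color_histogram(rgb, bins=8):
    """Per-channel colour histograms, concatenated and normalised."""
    chans = [np.histogram(rgb[..., k], bins=bins, range=(0, 256))[0]
             for k in range(3)]
    h = np.concatenate(chans).astype(float)
    return h / h.sum()

def fused_descriptor(rgb):
    """Concatenate the colour and LBP histograms into one descriptor."""
    gray = rgb.mean(axis=2)
    return np.concatenate([color_histogram(rgb), lbp_histogram(gray)])
```

The fused vector can then be compared with any histogram distance for retrieval, or fed to a classifier.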
38
Chen H, Wu X, Tao G, Peng Q. Automatic content understanding with cascaded spatial–temporal deep framework for capsule endoscopy videos. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.077] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
39
40
Seguí S, Drozdzal M, Pascual G, Radeva P, Malagelada C, Azpiroz F, Vitrià J. Generic feature learning for wireless capsule endoscopy analysis. Comput Biol Med 2016; 79:163-172. [DOI: 10.1016/j.compbiomed.2016.10.011] [Citation(s) in RCA: 64] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Revised: 10/11/2016] [Accepted: 10/13/2016] [Indexed: 12/11/2022]
41
Charisis VS, Hadjileontiadis LJ. Potential of hybrid adaptive filtering in inflammatory lesion detection from capsule endoscopy images. World J Gastroenterol 2016; 22:8641-8657. [PMID: 27818583 PMCID: PMC5075542 DOI: 10.3748/wjg.v22.i39.8641] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Revised: 09/02/2016] [Accepted: 09/14/2016] [Indexed: 02/06/2023] Open
Abstract
A new feature extraction technique is presented for the detection of lesions created by mucosal inflammation in Crohn's disease, based on wireless capsule endoscopy (WCE) image processing. More specifically, a novel filtering process, namely Hybrid Adaptive Filtering (HAF), was developed for efficient extraction of lesion-related structural/textural characteristics from WCE images by applying genetic algorithms to the curvelet-based representation of the images. Additionally, Differential Lacunarity (DLac) analysis was applied for feature extraction from the HAF-filtered images. The resulting scheme, namely HAF-DLac, incorporates support vector machines for robust lesion recognition. For the training and testing of HAF-DLac, an 800-image database was used, acquired from 13 patients who underwent WCE examinations, where the abnormal cases were grouped into mild and severe according to the severity of the depicted lesion, for a more extensive evaluation of performance. Experimental results, along with comparisons with other related efforts, show that the HAF-DLac approach evidently outperforms them in the field of WCE image analysis for automated lesion detection, providing higher classification results, up to 93.8% (accuracy), 95.2% (sensitivity), 92.4% (specificity), and 92.6% (precision). The promising performance of HAF-DLac paves the way for a complete computer-aided diagnosis system that could support physicians' clinical practice.
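The DLac features above build on the gliding-box lacunarity estimator Λ(r) = E[M²]/E[M]², where M is the mass inside a box of size r. As a hedged sketch (the HAF curvelet/genetic-algorithm filtering stage and the exact differential variant used in the paper are omitted), the plain gliding-box lacunarity can be computed as:

```python
import numpy as np

def gliding_box_lacunarity(img, box):
    """Gliding-box lacunarity for one box size: E[M^2] / E[M]^2, where
    M is the sum of intensities inside each box position."""
    h, w = img.shape
    masses = [img[y:y + box, x:x + box].sum()
              for y in range(h - box + 1)
              for x in range(w - box + 1)]
    m = np.asarray(masses, dtype=float)
    # E[M^2]/E[M]^2 = var(M)/mean(M)^2 + 1
    return m.var() / m.mean() ** 2 + 1.0

def lacunarity_features(img, box_sizes=(2, 4, 8)):
    """Multiscale lacunarity signature used as a texture feature vector."""
    return np.array([gliding_box_lacunarity(img, b) for b in box_sizes])
```

A homogeneous image yields Λ(r) = 1 at every scale, and increasingly "gappy" textures yield larger values, which is what makes the signature useful as a lesion texture descriptor ahead of the SVM.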
42
Keuchel M, Kurniawan N, Baltes P, Bandorski D, Koulaouzidis A. Quantitative measurements in capsule endoscopy. Comput Biol Med 2015; 65:333-47. [PMID: 26299419 DOI: 10.1016/j.compbiomed.2015.07.016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2015] [Revised: 07/16/2015] [Accepted: 07/17/2015] [Indexed: 12/14/2022]
Abstract
This review summarizes several approaches for quantitative measurement in capsule endoscopy. Video capsule endoscopy (VCE) typically provides wireless imaging of the small bowel. Currently, a variety of quantitative measurements are implemented in commercially available hardware/software; the majority are proprietary and hence undisclosed algorithms. Measurement of the amount of luminal contamination allows calculating scores from whole VCE studies. Other scores express the severity of small bowel lesions in Crohn's disease or the degree of villous atrophy in celiac disease. Image processing with numerous algorithms for textural and color feature extraction is a further research focus for automated image analysis. These tools aim to select single images with relevant lesions such as blood, ulcers, polyps, and tumors, or to omit images showing only luminal contamination. Analysis of motility patterns, size measurement, and determination of capsule localization are additional topics. Non-visual wireless capsules that transmit data acquired with specific sensors from the gastrointestinal (GI) tract are available for clinical routine. This includes pH measurement in the esophagus for the diagnosis of acid gastro-esophageal reflux. A wireless motility capsule provides GI motility analysis on the basis of pH, pressure, and temperature measurements, and electromagnetic tracking of another motility capsule allows visualization of motility. Measurement of substances by GI capsules is of great interest but still at an early stage of development.
Affiliation(s)
- M Keuchel
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany.
- N Kurniawan
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany
- P Baltes
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany