1. Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-Aided Bleeding Detection Algorithms for Capsule Endoscopy: A Systematic Review. Sensors (Basel) 2023; 23:7170. [PMID: 37631707] [PMCID: PMC10459126] [DOI: 10.3390/s23167170]
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for diagnosing gastrointestinal tract abnormalities such as bleeding. However, CE captures a huge number of image frames, making manual inspection by medical experts time-consuming and tedious. To address this issue, researchers have focused on computer-aided bleeding detection systems that automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used, and 147 full-text scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper concludes by providing future directions for computer-aided bleeding detection research.
Affiliation(s)
- Ahmmad Musha
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Rehnuma Hasnat
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Abdullah Al Mamun
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Em Poh Ping
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Tonmoy Ghosh
- Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
2. Horovistiz A, Oliveira M, Araújo H. Computer vision-based solutions to overcome the limitations of wireless capsule endoscopy. J Med Eng Technol 2023; 47:242-261. [PMID: 38231042] [DOI: 10.1080/03091902.2024.2302025]
Abstract
Endoscopic investigation plays a critical role in the diagnosis of gastrointestinal (GI) diseases. Since 2001, Wireless Capsule Endoscopy (WCE) has been available for small bowel exploration and is in continuous development. Over the last decade, WCE has achieved impressive improvements in areas such as miniaturisation, image quality and battery life. As a result, WCE is currently a very useful alternative to wired enteroscopy in the investigation of various small bowel abnormalities and has the potential to become the leading screening technique for the entire gastrointestinal tract. However, commercial solutions still have several limitations, namely incomplete examination and limited diagnostic capacity. These deficiencies are related to technical issues, such as image quality, motion estimation and power consumption management. Computational methods, based on image processing and analysis, can help to overcome these challenges and reduce both the time required by reviewers and human interpretation errors. Research groups have proposed a series of methods, including algorithms for locating the capsule or lesion, assessing intestinal motility and improving image quality. In this work, we provide a critical review of computational vision-based methods for WCE image analysis aimed at overcoming the technological challenges of capsules. This article also reviews several representative public datasets used to evaluate the performance of WCE techniques and methods. Finally, some promising solutions based on the analysis of multiple-camera endoscopic images are presented.
Affiliation(s)
- Ana Horovistiz
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Marina Oliveira
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Department of Electrical and Computer Engineering (DEEC), Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Helder Araújo
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Department of Electrical and Computer Engineering (DEEC), Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
3. Kim HJ, Gong EJ, Bang CS, Lee JJ, Suk KT, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Protruded Lesions Using Wireless Capsule Endoscopy: A Systematic Review and Diagnostic Test Accuracy Meta-Analysis. J Pers Med 2022; 12:644. [PMID: 35455760] [PMCID: PMC9029411] [DOI: 10.3390/jpm12040644]
Abstract
Background: Wireless capsule endoscopy allows the identification of small intestinal protruded lesions, such as polyps, tumors, or venous structures. However, reading wireless capsule endoscopy images or movies is time-consuming, and minute lesions are easy to miss. Computer-aided diagnosis (CAD) has been applied to improve the efficacy of the reading process of wireless capsule endoscopy images or movies. However, there are no studies that systematically determine the performance of CAD models in diagnosing gastrointestinal protruded lesions. Objective: The aim of this study was to evaluate the diagnostic performance of CAD models for gastrointestinal protruded lesions using wireless capsule endoscopic images. Methods: Core databases were searched for studies based on CAD models for the diagnosis of gastrointestinal protruded lesions using wireless capsule endoscopy, and data on diagnostic performance were presented. A systematic review and diagnostic test accuracy meta-analysis were performed. Results: Twelve studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of protruded lesions were 0.95 (95% confidence interval, 0.93–0.97), 0.89 (0.84–0.92), 0.91 (0.86–0.94), and 74 (43–126), respectively. Subgroup analyses showed robust results. Meta-regression found no source of heterogeneity. Publication bias was not detected. Conclusion: CAD models showed high performance for the optical diagnosis of gastrointestinal protruded lesions based on wireless capsule endoscopy.
Affiliation(s)
- Hye Jin Kim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Correspondence: Tel.: +82-33-240-5821; Fax: +82-33-241-8064
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Ki Tae Suk
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
4. Muruganantham P, Balakrishnan SM. Attention Aware Deep Learning Model for Wireless Capsule Endoscopy Lesion Classification and Localization. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00686-8]
5. Zhao PY, Han K, Yao RQ, Ren C, Du XH. Application Status and Prospects of Artificial Intelligence in Peptic Ulcers. Front Surg 2022; 9:894775. [PMID: 35784921] [PMCID: PMC9244632] [DOI: 10.3389/fsurg.2022.894775]
Abstract
Peptic ulcer (PU) is a common and frequently occurring disease. Although PU seriously threatens the lives and health of people worldwide, applications of artificial intelligence (AI) have strongly promoted diversification and modernization in the diagnosis and treatment of PU. This minireview elaborates on the research progress of AI in the field of PU, from PU's pathogenic factor Helicobacter pylori (Hp) infection, through diagnosis and differential diagnosis, to its management and complications (bleeding, obstruction, perforation and canceration). Finally, the challenges and prospects of AI application in PU are discussed. As modern medical technology advances, AI remains a promising option in the management of PU patients and will play an increasingly indispensable role. Realizing the robustness, versatility and diversity of multifunctional AI systems for PU, and conducting multicenter prospective clinical research as soon as possible, are the top priorities for the future.
Affiliation(s)
- Peng-yue Zhao
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ke Han
- Department of Gastroenterology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ren-qi Yao
- Translational Medicine Research Center, Medical Innovation Research Division and Fourth Medical Center of the Chinese PLA General Hospital, Beijing, China
- Correspondence: Xiao-hui Du, Chao Ren, Ren-qi Yao
- Chao Ren
- Department of Pulmonary and Critical Care Medicine, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiao-hui Du
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
6. Bang CS, Lee JJ, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J Med Internet Res 2021; 23:e33267. [PMID: 34904949] [PMCID: PMC8715364] [DOI: 10.2196/33267]
Abstract
BACKGROUND Interpretation of capsule endoscopy images or movies is operator-dependent and time-consuming. As a result, computer-aided diagnosis (CAD) has been applied to enhance the efficacy and accuracy of the review process. Two previous meta-analyses reported the diagnostic performance of CAD models for gastrointestinal ulcers or hemorrhage in capsule endoscopy; however, the systematic reviews conducted so far are insufficient to determine the real diagnostic validity of CAD models. OBJECTIVE To evaluate the diagnostic test accuracy of CAD models for gastrointestinal ulcers or hemorrhage using wireless capsule endoscopic images. METHODS We searched core databases for studies based on CAD models for the diagnosis of ulcers or hemorrhage using capsule endoscopy that presented data on diagnostic performance. Systematic review and diagnostic test accuracy meta-analysis were performed. RESULTS Overall, 39 studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of ulcers (or erosions) were 0.97 (95% confidence interval, 0.95-0.98), 0.93 (0.89-0.95), 0.92 (0.89-0.94), and 138 (79-243), respectively. The corresponding values for the diagnosis of hemorrhage (or angioectasia) were 0.99 (0.98-0.99), 0.96 (0.94-0.97), 0.97 (0.95-0.99), and 888 (343-2303), respectively. Subgroup analyses showed robust results. Meta-regression found that publication year, number of training images, and target disease (ulcers vs erosions, hemorrhage vs angioectasia) were sources of heterogeneity. No publication bias was detected. CONCLUSIONS CAD models showed high performance for the optical diagnosis of gastrointestinal ulcer and hemorrhage in wireless capsule endoscopy.
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
7. Guo X, Li S, Zhang L, Wang Y, Zhang L, Liu Z, Guo J, Du Y. A novel Joint-Net model for recognizing small-bowel polyp images. Minim Invasive Ther Allied Technol 2021; 31:712-719. [PMID: 34730070] [DOI: 10.1080/13645706.2021.1980402]
Abstract
INTRODUCTION To automatically recognize polyps in enteroscopy images and avoid missed pathological changes, a novel Joint-Net is proposed. MATERIAL AND METHODS The left half of the Joint-Net is constructed by transfer learning from VGG16, and its right half is deepened based on the U-Net. In the first two skip connections, a 3 × 3 convolution layer is added and the original two convolutions are replaced by identity blocks. To connect the left and right halves, an asymmetric convolution layer is used, and a loophole-like structure is used at the output. RESULTS The enteroscopy images were obtained at Changhai Hospital, Shanghai. The mean Dice and intersection-over-union values were 90.05% and 82.71%, respectively. The classification accuracy for normal images and polyp images was 93.50%. CONCLUSIONS The experiments show that the Joint-Net can successfully segment and recognize polyps.
Affiliation(s)
- Xudong Guo
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Shengnan Li
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linqi Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Yuxin Wang
- Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai, China
- Lulu Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Zhang Liu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Jiefang Guo
- Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai, China
- Yiqi Du
- Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai, China
8. Artificial Intelligence in Capsule Endoscopy: A Practical Guide to Its Past and Future Challenges. Diagnostics (Basel) 2021; 11:1722. [PMID: 34574063] [PMCID: PMC8469774] [DOI: 10.3390/diagnostics11091722]
Abstract
Artificial intelligence (AI) has revolutionized the medical diagnostic process for various diseases. Since the manual reading of capsule endoscopy videos is a time-intensive, error-prone process, computerized algorithms have been introduced to automate it. Over the past decade, the evolution of convolutional neural networks (CNNs) has enabled AI to detect multiple lesions simultaneously with increasing accuracy and sensitivity. However, difficulty in validating CNN performance and the unique characteristics of capsule endoscopy images mean that computer-aided reading systems in capsule endoscopy remain at a preclinical level. Although AI technology can currently be used as an auxiliary second observer in capsule endoscopy, it is expected that in the near future it will effectively reduce the reading time and ultimately become an independent, integrated reading system.
9. Guo X, Zhang L, Hao Y, Zhang L, Liu Z, Liu J. Multiple abnormality classification in wireless capsule endoscopy images based on EfficientNet using attention mechanism. Rev Sci Instrum 2021; 92:094102. [PMID: 34598534] [DOI: 10.1063/5.0054161]
Abstract
The wireless capsule endoscopy (WCE) procedure produces tens of thousands of images of the digestive tract, making the manual reading process highly challenging. Convolutional neural networks are used to automatically detect lesions in WCE images. However, studies on clinical multi-lesion detection are scarce, and it is difficult to effectively balance the sensitivity to multiple lesions. A strategy for detecting multiple lesions is proposed, wherein common vascular and inflammatory lesions can be automatically and quickly detected in capsule endoscopic images. Based on weakly supervised learning, EfficientNet is fine-tuned to extract the endoscopic image features. Combining spatial and channel features, the proposed attention network is then used as a classifier to obtain three classifications. The accuracy and speed of the model were compared with those of the ResNet121 and InceptionNetV4 models. It was tested on a public WCE image dataset obtained from 4143 subjects. On the computer-assisted diagnosis for capsule endoscopy database, the method gives a sensitivity of 96.67% for vascular lesions and 93.33% for inflammatory lesions. The precision was 92.80% for vascular lesions and 95.73% for inflammatory lesions. The accuracy was 96.11%, which is 1.11% higher than that of the latest InceptionNetV4 network. Prediction for an image requires only 14 ms, striking a good balance between accuracy and speed. This strategy can serve as an auxiliary diagnostic method for specialists for the rapid reading of clinical capsule endoscopy.
Affiliation(s)
- Xudong Guo
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Lulu Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Youguo Hao
- Department of Rehabilitation, Shanghai Putuo People's Hospital, Shanghai 200060, China
- Linqi Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhang Liu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Jiannan Liu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
10. Yang Y, Li YX, Yao RQ, Du XH, Ren C. Artificial intelligence in small intestinal diseases: Application and prospects. World J Gastroenterol 2021; 27:3734-3747. [PMID: 34321840] [PMCID: PMC8291013] [DOI: 10.3748/wjg.v27.i25.3734]
Abstract
The small intestine is located in the middle of the gastrointestinal tract, so small intestinal diseases are more difficult to diagnose than other gastrointestinal diseases. However, with its efficient learning capacity and computational power, artificial intelligence is now extensively applied in this field, where it plays an important role in auxiliary diagnosis and prognosis prediction based on capsule endoscopy and other examination methods, improving the accuracy of diagnosis and prediction and reducing the workload of doctors. In this review, a comprehensive search was performed for articles published up to October 2020 in PubMed and other databases. The application status of artificial intelligence in small intestinal diseases is systematically introduced, and the challenges and prospects in this field are also analyzed.
Affiliation(s)
- Yu Yang
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Yu-Xuan Li
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Ren-Qi Yao
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
- Department of Burn Surgery, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Xiao-Hui Du
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Chao Ren
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
11. Lan L, Ye C. Recurrent generative adversarial networks for unsupervised WCE video summarization. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106971]
12. Smedsrud PH, Thambawita V, Hicks SA, Gjestang H, Nedrejord OO, Næss E, Borgli H, Jha D, Berstad TJD, Eskeland SL, Lux M, Espeland H, Petlund A, Nguyen DTD, Garcia-Ceja E, Johansen D, Schmidt PT, Toth E, Hammer HL, de Lange T, Riegler MA, Halvorsen P. Kvasir-Capsule, a video capsule endoscopy dataset. Sci Data 2021; 8:142. [PMID: 34045470] [PMCID: PMC8160146] [DOI: 10.1038/s41597-021-00920-z]
Abstract
Artificial intelligence (AI) is predicted to have profound effects on the future of video capsule endoscopy (VCE) technology. The potential lies in improving anomaly detection while reducing manual labour. Existing work demonstrates the promising benefits of AI-based computer-assisted diagnosis systems for VCE and shows great potential for further improvement. However, medical data are often sparse and unavailable to the research community, and qualified medical personnel rarely have time for the tedious labelling work. We present Kvasir-Capsule, a large VCE dataset collected from examinations at a Norwegian hospital. Kvasir-Capsule consists of 117 videos, from which a total of 4,741,504 image frames can be extracted. We have labelled and medically verified 47,238 frames with a bounding box around findings from 14 different classes. In addition to these labelled images, the dataset includes 4,694,266 unlabelled frames. The Kvasir-Capsule dataset can play a valuable role in developing better algorithms in order to reach the true potential of VCE technology.
Affiliation(s)
- Pia H Smedsrud
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Augere Medical AS, Oslo, Norway
- Steven A Hicks
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Espen Næss
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Hanna Borgli
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Debesh Jha
- SimulaMet, Oslo, Norway
- UIT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UIT The Arctic University of Norway, Tromsø, Norway
- Peter T Schmidt
- Karolinska Institutet, Department of Medicine, Solna, Sweden
- Ersta Hospital, Department of Medicine, Stockholm, Sweden
- Ervin Toth
- Department of Gastroenterology, Skåne University Hospital, Malmö, Lund University, Malmö, Sweden
- Hugo L Hammer
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange
- Department of Medical Research, Bærum Hospital, Gjettum, Norway
- Augere Medical AS, Oslo, Norway
- Medical Department, Sahlgrenska University Hospital-Mölndal Hospital, Göteborg, Sweden
- Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Göteborg, Sweden
- Pål Halvorsen
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
13. Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355] [DOI: 10.2174/1573405616666200928144626]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. However, manual assessment of the captured images is impractical even for an expert physician, since thoroughly analyzing thousands of images is a time-consuming task. Hence arises the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze the images. Many researchers have proposed techniques for the automated recognition and classification of abnormalities in captured images. METHODS In this article, existing methods for automated classification, segmentation, and detection of several GI diseases are discussed. The paper gives comprehensive detail of these state-of-the-art methods. Furthermore, the literature is divided into several subsections covering preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review article gathers information on a number of GI disease diagnosis methods in one place. It will facilitate researchers in developing new algorithms and approaches for the early detection of GI diseases, with more promising results than the existing ones in the literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
14. Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058] [PMCID: PMC7959662] [DOI: 10.7717/peerj-cs.423]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most related work based on DL approaches extracted spatial features only. In the following stage of Gastro-CADx, however, the features extracted in the first stage are applied to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features; a feature reduction procedure is also performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved accuracies of 97.3% and 99.7% on Dataset I and Dataset II, respectively. The results were compared with recent related works; the comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy than other work. Thus, it can be used to reduce medical complications and death rates, in addition to the cost of treatment. It can also help gastroenterologists produce more accurate diagnoses while lowering inspection time.
Affiliation(s)
- Omneya Attallah, Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas, Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
15
Babu C, Chandy DA. A Review on Lossless Compression Techniques for Wireless Capsule Endoscopic Data. Curr Med Imaging 2021; 17:27-38. [PMID: 32324517] [DOI: 10.2174/1573405616666200423084725]
Abstract
BACKGROUND The videos produced during wireless capsule endoscopy are large, which makes transmission difficult over limited bandwidth. The constraints of the wireless capsule also hinder the performance of the compression module. OBJECTIVES The objectives of this paper are: (i) to conduct an extensive review of lossless compression techniques and (ii) to identify the limitations of existing systems and the possibilities for improvement. METHODS The literature review focused on compression schemes that satisfy minimal computational complexity, low power dissipation, and low memory requirements for hardware implementation. Various lossless compression techniques were studied from two perspectives, i.e., techniques applied to Bayer CFA images and to RGB images. The various stages of wireless capsule endoscopy compression were investigated in detail, and suitable performance metrics for evaluating compression techniques were compiled from the literature. RESULTS In addition to the Gastrolab database, the WEO clinical endoscopy atlas and the Gastrointestinal atlas were found to be good alternatives for experimentation. Pre-processing operations, especially new subsampling patterns, need more attention in order to exploit redundancies in the images. Investigations showed that the encoder module can be modified to further improve compression. Real-time endoscopy remains a promising area for exploration. CONCLUSION This review presents a research update on wireless capsule endoscopy compression, with findings that serve as guidance for further research.
Affiliation(s)
- Caren Babu, Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- D Abraham Chandy, Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
16
Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021; 25:514-525. [PMID: 32750912] [DOI: 10.1109/jbhi.2020.2997760]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
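The evaluation metric named above, mean intersection over union (mIoU), is straightforward to compute from binary masks. The sketch below is a generic illustration with invented toy masks, not code from the MCNet paper.

```python
def iou(pred, target):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    return inter / union if union else 1.0  # both masks empty: perfect agreement

def mean_iou(pairs):
    """Mean IoU over (prediction, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

# Toy masks: the prediction covers 2 of 3 lesion pixels and adds 1 false positive
pred   = [0, 1, 1, 1, 0, 0]
target = [0, 0, 1, 1, 1, 0]
print(iou(pred, target))  # 2 / 4 = 0.5
```

In practice the same computation runs per image over 2-D masks, and mIoU is averaged across the test set, which is how the figures on EndoVis-Ab and CVC-ClinicDB would be obtained.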
17
Liaqat A, Khan MA, Sharif M, Mittal M, Saba T, Manic KS, Al Attar FNH. Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review. Curr Med Imaging 2021; 16:1229-1242. [PMID: 32334504] [DOI: 10.2174/1573405616666200425220513]
Abstract
Recent facts and figures published in various studies in the US show that approximately 27,510 new cases of gastric infections are diagnosed. Furthermore, it has been reported that the mortality rate is quite high among diagnosed cases. Early detection of these infections can save precious human lives. As manual inspection for these infections is time-consuming and expensive, automated Computer-Aided Diagnosis (CAD) systems are required to help endoscopy specialists in their clinics. Generally, an automated method for gastric infection detection using Wireless Capsule Endoscopy (WCE) comprises the following steps: contrast preprocessing, feature extraction, segmentation of infected regions, and classification into relevant categories. Each of these steps presents challenges that reduce detection and recognition accuracy and increase computation time. In this review, the authors focus on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. Further, the general steps are presented, highlighting the importance of each. A detailed discussion and future directions are provided at the end.
Affiliation(s)
- Amna Liaqat, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mamta Mittal, Department of Computer Science & Engineering, G.B. Pant Govt. Engineering College, New Delhi, India
- Tanzila Saba, Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- K. Suresh Manic, Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman
18
Barash Y, Azaria L, Soffer S, Margalit Yehuda R, Shlomi O, Ben-Horin S, Eliakim R, Klang E, Kopylov U. Ulcer severity grading in video capsule images of patients with Crohn's disease: an ordinal neural network solution. Gastrointest Endosc 2021; 93:187-192. [PMID: 32535191] [DOI: 10.1016/j.gie.2020.05.066]
Abstract
BACKGROUND AND AIMS Capsule endoscopy (CE) is an important modality for diagnosis and follow-up of Crohn's disease (CD). The severity of ulcers at endoscopy is significant for predicting the course of CD. Deep learning has been proven accurate in detecting ulcers on CE. However, endoscopic classification of ulcers by deep learning has not been attempted. The aim of our study was to develop a deep learning algorithm for automated grading of CD ulcers on CE. METHODS We retrospectively collected CE images of CD ulcers from our CE database. In experiment 1, the severity of each ulcer was graded by 2 capsule readers based on the PillCam CD classification (grades 1-3 from mild to severe), and the inter-reader variability was evaluated. In experiment 2, a consensus reading by 3 capsule readers was used to train an ordinal convolutional neural network (CNN) to automatically grade images of ulcers, and the resulting algorithm was tested against the consensus reading. A pretraining stage included training the network on images of normal mucosa and ulcerated mucosa. RESULTS Overall, our dataset included 17,640 CE images from 49 patients; 7391 images with mucosal ulcers and 10,249 normal images. A total of 2598 randomly selected pathologic images were further graded from 1 to 3 according to ulcer severity in the 2 different experiments. In experiment 1, overall inter-reader agreement occurred for 31% of the images (345 of 1108) and 76% (752 of 989) for distinction of grades 1 and 3. In experiment 2, the algorithm was trained on 1242 images. It achieved an overall agreement for consensus reading of 67% (166 of 248) and 91% (158 of 173) for distinction of grades 1 and 3. The classification accuracy of the algorithm was 0.91 (95% confidence interval, 0.867-0.954) for grade 1 versus grade 3 ulcers, 0.78 (95% confidence interval, 0.716-0.844) for grade 2 versus grade 3, and 0.624 (95% confidence interval, 0.547-0.701) for grade 1 versus grade 2. 
CONCLUSIONS CNN achieved high accuracy in detecting severe CD ulcerations. CNN-assisted CE readings in patients with CD can potentially facilitate and improve diagnosis and monitoring in these patients.
Affiliation(s)
- Yiftach Barash, Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Liran Azaria, DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Shelly Soffer, DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Reuma Margalit Yehuda, Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Oranit Shlomi, Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Shomron Ben-Horin, Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Rami Eliakim, Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Eyal Klang, Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Uri Kopylov, Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
19
Owais M, Arsalan M, Mahmood T, Kang JK, Park KR. Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation. J Med Internet Res 2020; 22:e18563. [PMID: 33242010] [PMCID: PMC7728528] [DOI: 10.2196/18563]
Abstract
BACKGROUND The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning-based computer-aided diagnosis tools have made a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract. OBJECTIVE This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. METHODS Our proposed framework comprises a deep learning-based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. RESULTS All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods. CONCLUSIONS This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network can also be applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.
Affiliation(s)
- Muhammad Owais, Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Muhammad Arsalan, Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Tahir Mahmood, Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Jin Kyu Kang, Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Kang Ryoung Park, Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
20
Jani KK, Srivastava R. A Survey on Medical Image Analysis in Capsule Endoscopy. Curr Med Imaging 2020; 15:622-636. [PMID: 32008510] [DOI: 10.2174/1573405614666181102152434]
Abstract
BACKGROUND AND OBJECTIVE Capsule Endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6 to 8 hour long video, posing a tedious challenge to a gastroenterologist for abnormality detection. The major challenges for an expert are the lengthy videos, the need for constant concentration, and the subjectivity of the abnormality. To address these challenges while maintaining high diagnostic accuracy, the design and development of an automated abnormality detection system is a must. Machine learning and computer vision techniques are devised to develop such automated systems. METHODS This study presents a review of quality research papers published in the IEEE, Scopus, and ScienceDirect databases, with the search criteria capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications; after evaluating all articles, 62 publications pertaining to image analysis were selected. RESULTS This paper presents a rigorous review comprising all aspects of medical image analysis concerning capsule endoscopy, namely video summarization and redundant image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection, and image and video compression. The study provides a comparative analysis of the approaches, experimental setups, performance, strengths, and limitations of the aspects stated above. CONCLUSIONS The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
Affiliation(s)
- Kuntesh Ketan Jani, Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
- Rajeev Srivastava, Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
21
Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92:831-839.e8. [PMID: 32334015] [DOI: 10.1016/j.gie.2020.04.039]
Abstract
BACKGROUND AND AIMS Deep learning is an innovative algorithm based on neural networks. Wireless capsule endoscopy (WCE) is considered the criterion standard for detecting small-bowel diseases. Manual examination of WCE is time-consuming and can benefit from automatic detection using artificial intelligence (AI). We aimed to perform a systematic review of the current literature pertaining to deep learning implementation in WCE. METHODS We conducted a search in PubMed for all original publications on the subject of deep learning applications in WCE published between January 1, 2016 and December 15, 2019. Evaluation of the risk of bias was performed using tailored Quality Assessment of Diagnostic Accuracy Studies-2. Pooled sensitivity and specificity were calculated. Summary receiver operating characteristic curves were plotted. RESULTS Of the 45 studies retrieved, 19 studies were included. All studies were retrospective. Deep learning applications for WCE included detection of ulcers, polyps, celiac disease, bleeding, and hookworm. Detection accuracy was above 90% for most studies and diseases. Pooled sensitivity and specificity for ulcer detection were .95 (95% confidence interval [CI], .89-.98) and .94 (95% CI, .90-.96), respectively. Pooled sensitivity and specificity for bleeding or bleeding source were .98 (95% CI, .96-.99) and .99 (95% CI, .97-.99), respectively. CONCLUSIONS Deep learning has achieved excellent performance for the detection of a range of diseases in WCE. Notwithstanding, current research is based on retrospective studies with a high risk of bias. Thus, future prospective, multicenter studies are necessary for this technology to be implemented in the clinical use of WCE.
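For readers unfamiliar with pooling, the simplest way to aggregate a rate such as sensitivity across studies is to divide total events by total trials. This is only an illustration with invented per-study counts; published meta-analyses like the one above typically fit a bivariate random-effects model rather than this naive fixed pooling.

```python
def pooled_rate(pairs):
    """Naive pooled proportion: total events / total trials across studies.
    (Real meta-analyses usually fit a bivariate random-effects model instead;
    this is only the simplest fixed-pooling illustration.)"""
    events = sum(e for e, n in pairs)
    trials = sum(n for e, n in pairs)
    return events / trials

# Hypothetical per-study (true positives, total positives) counts
studies_tp = [(95, 100), (48, 50), (180, 190)]
print(round(pooled_rate(studies_tp), 3))  # 0.95
```

The same helper applied to (true negatives, total negatives) pairs would give a pooled specificity.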
22
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767]
23
Chuquimia O, Pinna A, Dray X, Granado B. A Low Power and Real-Time Architecture for Hough Transform Processing Integration in a Full HD-Wireless Capsule Endoscopy. IEEE Trans Biomed Circuits Syst 2020; 14:646-657. [PMID: 32746352] [DOI: 10.1109/tbcas.2020.3008458]
Abstract
We propose a new paradigm of a smart wireless capsule endoscope (WCE) that can select suspicious images containing a polyp before sending them outside the body. To do so, we have designed an image processing system that selects images with Regions Of Interest (ROIs) containing a polyp. The criterion used to select an ROI is based on the polyp's shape. We use the Hough Transform (HT), a widely used shape-based algorithm for object detection and localization, to make this selection. In this paper, we present a new algorithm to compute the Hough Transform of high-definition images (1920 x 1080 pixels) in real time. This algorithm has been designed to be integrated inside a WCE, where there are specific constraints: a limited area and a limited amount of energy. To validate our algorithm, we ran tests on a dataset containing synthetic images, real images, and endoscopic images with polyps. Results show that our algorithm can detect circular shapes in synthetic and real images, and can also detect circles with irregular contours, like those of polyps. We have implemented our architecture and validated it on a Xilinx Spartan 7 FPGA device, with an area of [Formula: see text], which is compatible with integration inside a WCE. This architecture runs at 132 MHz with an estimated power consumption of 76 mW and can work for close to 10 hours. To improve the capacity of our architecture, we have also made an ASIC estimation, which lets our architecture work at 125 MHz with a power consumption of only 17.2 mW and a duration of approximately 50 hours.
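To make the circular Hough Transform concrete: each edge point votes for every candidate centre lying one radius away from it, and the most-voted accumulator cell is the detected centre. The sketch below is a generic software illustration on a tiny synthetic edge map, not the paper's hardware-oriented algorithm; the function name, grid size, and angular step are invented for the example.

```python
import math

def hough_circles(edge_points, radius, width, height):
    """Vote for circle centres at a fixed radius: each edge point votes for
    every centre lying `radius` away from it (classic circular HT)."""
    acc = {}
    for (x, y) in edge_points:
        for theta in range(0, 360, 5):
            a = round(x - radius * math.cos(math.radians(theta)))
            b = round(y - radius * math.sin(math.radians(theta)))
            if 0 <= a < width and 0 <= b < height:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc, key=acc.get)  # most-voted centre

# Synthetic edge map: points on a circle of radius 10 centred at (20, 20)
pts = [(20 + round(10 * math.cos(math.radians(t))),
        20 + round(10 * math.sin(math.radians(t)))) for t in range(0, 360, 10)]
print(hough_circles(pts, 10, 40, 40))  # detected centre, close to (20, 20)
```

A hardware version like the one above restructures this voting loop so the accumulator fits the area and energy budget of the capsule, but the underlying vote-and-peak principle is the same.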
24
Ashour AS, Dey N, Mohamed WS, Tromp JG, Sherratt RS, Shi F, Moraru L. Colored Video Analysis in Wireless Capsule Endoscopy: A Survey of State-of-the-Art. Curr Med Imaging 2020; 16:1074-1084. [PMID: 32107996] [DOI: 10.2174/1573405616666200124140915]
Abstract
Wireless Capsule Endoscopy (WCE) is a highly promising technology for gastrointestinal (GI) tract abnormality diagnosis. However, low image resolution and low frame rates are challenging issues in WCE. In addition, the relevant frames containing the features of interest for accurate diagnosis constitute only 1% of the complete video information. For these reasons, analyzing WCE videos is still a time-consuming and laborious examination for gastroenterologists, which reduces WCE system usability. This leads to an urgent need to speed up and automate the WCE video process for GI tract examinations. Consequently, the present work introduces WCE technology, including the structure of WCE systems, with a focus on the medical endoscopy video capturing process using image sensors. It also discusses the significant characteristics of the different GI tract regions for effective feature extraction, and reports video approaches for bleeding and lesion detection in WCE video, together with computer-aided diagnosis systems in different applications to support the gastroenterologist in WCE video analysis. Reducing WCE video review time through image enhancement is also discussed, along with challenges and future perspectives, including the new trend of employing deep learning models for feature learning, polyp recognition, and classification as a new opportunity for researchers to develop future WCE video analysis techniques.
Affiliation(s)
- Amira S Ashour, Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta, 31527, Egypt
- Nilanjan Dey, Department of Information Technology, Techno India College of Technology, West Bengal, 740000, India
- Waleed S Mohamed, Department of Internal Medicine, Faculty of Medicine, Tanta University, Tanta, 31527, Egypt
- Jolanda G Tromp, Computer Science Department, Center for Visualization and Simulation, Duy Tan University, Da Nang, Vietnam
- R Simon Sherratt, Department of Biomedical Engineering, University of Reading, Reading, Berkshire, United Kingdom
- Fuqian Shi, Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, New Jersey, 08903, USA
- Luminița Moraru, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, Galati, Romania
25
Abstract
Artificial intelligence (AI), a discipline encompassed by data science, has seen recent rapid growth in its application to healthcare and beyond, and is now an integral part of daily life. Uses of AI in gastroenterology include the automated detection of disease and differentiation of pathology subtypes and disease severity. Although a majority of AI research in gastroenterology focuses on adult applications, there are a number of pediatric pathologies that could benefit from more research. As new and improved diagnostic tools become available and more information is retrieved from them, AI could provide physicians a method to distill enormous amounts of data into enhanced decision-making and cost saving for children with digestive disorders. This review provides a broad overview of AI and examples of its possible applications in pediatric gastroenterology.
26
Deeba F, Bui FM, Wahid KA. Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.04.007]
27
Wang S, Xing Y, Zhang L, Gao H, Zhang H. A systematic evaluation and optimization of automatic detection of ulcers in wireless capsule endoscopy on a large dataset using deep convolutional neural networks. Phys Med Biol 2019; 64:235014. [PMID: 31645019] [DOI: 10.1088/1361-6560/ab5086]
Abstract
Compared with conventional gastroscopy, which is invasive and painful, wireless capsule endoscopy (WCE) provides a noninvasive examination of the gastrointestinal (GI) tract. The WCE video can effectively support physicians in reaching a diagnostic decision, but a huge number of images need to be analyzed (more than 50,000 frames per patient). In this paper, we propose a computer-aided diagnosis method called the second glance (secG) detection framework for automatic detection of ulcers based on deep convolutional neural networks, which provides both a classification confidence and a bounding box of the lesion area. We evaluated its performance on a large dataset of 1504 patient cases (the largest WCE ulcer dataset to our best knowledge: 1076 cases with ulcers, 428 normal cases). We use 15,781 ulcer frames from 753 ulcer cases and 17,138 normal frames from 300 normal cases for training. The validation dataset consists of 2040 ulcer frames from 108 cases and 2319 frames from 43 normal cases. For testing, we use 4917 ulcer frames from 215 ulcer cases and 5007 frames from 85 normal cases. Test results demonstrate that the 0.9469 ROC-AUC of the proposed secG detection framework outperforms state-of-the-art detection frameworks, including Faster-RCNN (0.9014) and SSD-300 (0.8355), which implies the effectiveness of our method. From the ulcer size analysis, we find that detection is highly related to ulcer size: for ulcers larger than 1% of the full image size, the sensitivity exceeds 92.00%; for smaller ulcers, the sensitivity is around 85.00%. The overall sensitivity, specificity, and accuracy are 89.71%, 90.48%, and 90.10% at a threshold value of 0.6706, which implies the potential of the proposed method to prevent oversights and to reduce the burden on physicians.
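The operating-point figures quoted above (sensitivity, specificity, accuracy at a fixed threshold) all derive from the same four confusion-matrix counts. The snippet below shows the relationship with invented toy counts, not the paper's actual frame tallies.

```python
def confusion_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # recall on lesion frames
    specificity = tn / (tn + fp)                 # recall on normal frames
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall fraction correct
    return sensitivity, specificity, accuracy

# Toy counts: 100 lesion frames, 100 normal frames
sens, spec, acc = confusion_metrics(tp=90, fn=10, tn=85, fp=15)
print(sens, spec, acc)  # 0.9 0.85 0.875
```

Sweeping the decision threshold moves frames between these four cells, which is what traces out the ROC curve whose area (ROC-AUC) the paper reports.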
Affiliation(s)
- Sen Wang, Key Laboratory of Particle and Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, People's Republic of China; Department of Engineering Physics, Tsinghua University, Beijing 100084, People's Republic of China
28
Deep Convolutional Neural Network for Ulcer Recognition in Wireless Capsule Endoscopy: Experimental Feasibility and Optimization. Comput Math Methods Med 2019; 2019:7546215. [PMID: 31641370] [PMCID: PMC6766681] [DOI: 10.1155/2019/7546215]
Abstract
Wireless capsule endoscopy (WCE) has developed rapidly over the last several years and now enables physicians to examine the gastrointestinal tract without surgical operation. However, a large number of images must be analyzed to obtain a diagnosis. Deep convolutional neural networks (CNNs) have demonstrated impressive performance in different computer vision tasks. Thus, in this work, we aim to explore the feasibility of deep learning for ulcer recognition and optimize a CNN-based ulcer recognition architecture for WCE images. By analyzing the ulcer recognition task and the characteristics of classic deep learning networks, we propose a HAnet architecture that uses ResNet-34 as the base network and fuses hyper features from the shallow layer with deep features in deeper layers to provide final diagnostic decisions. 1,416 independent WCE videos were collected for this study. The overall test accuracy of our HAnet is 92.05%, and its sensitivity and specificity are 91.64% and 92.42%, respectively. According to our comparisons of F1, F2, and ROC-AUC, the proposed method performs better than several off-the-shelf CNN models, including VGG, DenseNet, and Inception-ResNet-v2, and classical machine learning methods with handcrafted features for WCE image classification. Overall, this study demonstrates that recognizing ulcers in WCE images via the deep CNN method is feasible and could help reduce the tedious image reading work of physicians. Moreover, our HAnet architecture, tailored for this problem, offers a sound choice of network structure.
29
Affiliation(s)
- Jihong Min, Andrew and Peggy Cherng Department of Medical Engineering, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
- Yiran Yang, Andrew and Peggy Cherng Department of Medical Engineering, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
- Zhiguang Wu, Andrew and Peggy Cherng Department of Medical Engineering, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
- Wei Gao, Andrew and Peggy Cherng Department of Medical Engineering, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
30
Hajabdollahi M, Esfandiarpoor R, Khadivi P, Soroushmehr S, Karimi N, Najarian K, Samavi S. Segmentation of bleeding regions in wireless capsule endoscopy for detection of informative frames. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101565]
31
Jani KK, Srivastava S, Srivastava R. Computer aided diagnosis system for ulcer detection in capsule endoscopy using optimized feature set. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-182883]
Affiliation(s)
- Kuntesh K. Jani
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
- Subodh Srivastava
- Department of Electronics and Communication Engineering, National Institute of Technology, Patna, Bihar, India
- Rajeev Srivastava
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
32
Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis. J Clin Med 2019; 8:jcm8070986. [PMID: 31284687 PMCID: PMC6678612 DOI: 10.3390/jcm8070986]
Abstract
Various techniques using artificial intelligence (AI) have contributed significantly to medical image- and video-based diagnosis in fields such as radiology, pathology, and endoscopy, including the classification of gastrointestinal (GI) diseases. Most previous studies on GI disease classification use only spatial features, which yield low performance when classifying multiple GI diseases. A few studies have used temporal features based on a three-dimensional convolutional neural network, but only for a specific part of the GI tract and a limited number of classes. To overcome these problems, we propose a comprehensive AI-based framework for the classification of multiple GI diseases from endoscopic videos that extracts spatial and temporal features simultaneously to achieve better classification performance. Two different residual networks and a long short-term memory model are integrated in a cascaded mode to extract spatial and temporal features, respectively. Experiments were conducted on a combined dataset, one of the largest collections of endoscopic video, with 52,471 frames. The results demonstrate the effectiveness of the proposed classification framework for multiple GI diseases: the proposed model achieves a 97.057% area under the curve, demonstrating superior performance over state-of-the-art methods and indicating its potential for clinical application.
33
Cummins G, Cox BF, Ciuti G, Anbarasan T, Desmulliez MPY, Cochran S, Steele R, Plevris JN, Koulaouzidis A. Gastrointestinal diagnosis using non-white light imaging capsule endoscopy. Nat Rev Gastroenterol Hepatol 2019; 16:429-447. [PMID: 30988520 DOI: 10.1038/s41575-019-0140-z]
Abstract
Capsule endoscopy (CE) has proved to be a powerful tool in the diagnosis and management of small bowel disorders since its introduction in 2001. However, white light imaging (WLI) is the principal technology used in clinical CE at present, and therefore, CE is limited to mucosal inspection, with diagnosis remaining reliant on visible manifestations of disease. The introduction of WLI CE has motivated a wide range of research to improve its diagnostic capabilities through integration with other sensing modalities. These developments have the potential to overcome the limitations of WLI through enhanced detection of subtle mucosal microlesions and submucosal and/or transmural pathology, providing novel diagnostic avenues. Other research aims to utilize a range of sensors to measure physiological parameters or to discover new biomarkers to improve the sensitivity, specificity and thus the clinical utility of CE. This multidisciplinary Review summarizes research into non-WLI CE devices by organizing them into a taxonomic structure on the basis of their sensing modality. The potential of these capsules to realize clinically useful virtual biopsy and computer-aided diagnosis (CADx) is also reported.
Affiliation(s)
- Gerard Cummins
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Marc P Y Desmulliez
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Sandy Cochran
- School of Engineering, University of Glasgow, Glasgow, UK
- Robert Steele
- School of Medicine, University of Dundee, Dundee, UK
- John N Plevris
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
34
Pogorelov K, Suman S, Hussin FA, Malik AS, Ostroukhova O, Riegler M, Halvorsen P, Ho SH, Goh KL. Bleeding detection in wireless capsule endoscopy videos - Color versus texture features. J Appl Clin Med Phys 2019; 20:141-154. [PMID: 31251460 PMCID: PMC6698770 DOI: 10.1002/acm2.12662]
Abstract
Wireless capsule endoscopy (WCE) is an effective technology for diagnosing various lesions and abnormalities of the gastrointestinal (GI) tract. Because the capsule takes a long time to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it a tedious job for clinical experts to visually check each frame of a patient's complete video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, an essential cue for the initial detection of frames with bleeding, with texture, which extracts additional information from the lesion captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm utilizes machine-learning-based classification methods; it can efficiently distinguish between bleeding and nonbleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. The experimental studies demonstrate that the proposed bleeding detection method is at least as accurate as state-of-the-art approaches. In this research, we also conducted a broad comparison of different state-of-the-art features and classification methods, enabling an efficient and flexible WCE video processing system.
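As the abstract notes, color is the primary cue for flagging candidate bleeding pixels. A minimal sketch of such a color rule, a red-to-green ratio test, is shown below; this is a generic illustration of the idea, not the authors' algorithm, and the threshold values are arbitrary assumptions.

```python
import numpy as np

def bleeding_mask(rgb, ratio_thresh=1.8, red_min=80):
    # Flag pixels whose red channel strongly dominates green as candidate bleeding.
    # rgb: (H, W, 3) uint8 image; returns a boolean (H, W) mask.
    rgb = rgb.astype(np.float32)
    r, g = rgb[..., 0], rgb[..., 1]
    ratio = r / (g + 1e-6)  # small epsilon avoids division by zero
    return (ratio > ratio_thresh) & (r > red_min)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (200, 40, 40)   # strongly red pixel -> flagged
frame[1, 1] = (120, 110, 90)  # mucosa-like pixel  -> not flagged
mask = bleeding_mask(frame)
print(mask)
```

A texture descriptor would then be computed only on flagged regions to reject borderline false positives, mirroring the color-then-texture pipeline described above.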
Affiliation(s)
- Konstantin Pogorelov
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Shipra Suman
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Fawnizu Azmadi Hussin
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Aamir Saeed Malik
- Center of Intelligent Signal & Imaging Research Group, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
- Olga Ostroukhova
- Research Institute of Multiprocessor Computation Systems, n.a. A.V. Kalyaev, Russia
- Michael Riegler
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Pål Halvorsen
- Department of Communication Systems, Simula Research Laboratory, Fornebu, Norway
- Shiaw Hooi Ho
- Department of Medicine, University of Malaya Medical Center, Kuala Lumpur, Malaysia
- Khean-Lee Goh
- Department of Medicine, University of Malaya Medical Center, Kuala Lumpur, Malaysia
35
Hwang Y, Park J, Lim YJ, Chun HJ. Application of Artificial Intelligence in Capsule Endoscopy: Where Are We Now? Clin Endosc 2018; 51:547-551. [PMID: 30508880 PMCID: PMC6283750 DOI: 10.5946/ce.2018.173]
Abstract
Unlike wired endoscopy, capsule endoscopy requires additional time for a clinical specialist to review the operation and examine the lesions. To reduce the tedious review time and increase the accuracy of medical examinations, various approaches have been reported based on artificial intelligence for computer-aided diagnosis. Recently, deep learning–based approaches have been applied to many possible areas, showing greatly improved performance, especially for image-based recognition and classification. By reviewing recent deep learning–based approaches for clinical applications, we present the current status and future direction of artificial intelligence for capsule endoscopy.
Affiliation(s)
- Youngbae Hwang
- Intelligent Image Processing Research Center, Korea Electronics Technology Institute (KETI), Seongnam, Korea
- Junseok Park
- Digestive Disease Center, Institute for Digestive Research, Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Hoon Jai Chun
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Seoul, Korea
36
Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2196-2210. [PMID: 29994763 DOI: 10.1109/tmi.2018.2837002]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations. This makes it a cost-effective approach for the analysis of large video-endoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames, and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) video frames are classified as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) salient points are detected from deeper WCNN layers using a deep saliency detection algorithm; and 3) GI anomalies are localized using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally from the detected salient points using information derived from the WCNN. Results from extensive experimentation on publicly available collections of gastrointestinal endoscopy video frames, covering a variety of GI anomalies, are presented. Both anomaly detection and localization performance, in terms of area under the receiver operating characteristic curve (AUC), exceeded 80%. The highest AUC for anomaly detection, 96%, was obtained on conventional gastroscopy images, and the highest AUC for anomaly localization, 88%, on wireless capsule endoscopy images.
37
DINOSARC: Color Features Based on Selective Aggregation of Chromatic Image Components for Wireless Capsule Endoscopy. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2018; 2018:2026962. [PMID: 30250496 PMCID: PMC6140007 DOI: 10.1155/2018/2026962]
Abstract
Wireless Capsule Endoscopy (WCE) is a noninvasive diagnostic technique enabling the inspection of the whole gastrointestinal (GI) tract by capturing and wirelessly transmitting thousands of color images. Proprietary software "stitches" the images into videos for examination by accredited readers. However, the videos produced are long, so the reading task becomes harder and more prone to human error. Automating the WCE reading process could both reduce the examination time and improve its diagnostic accuracy. In this paper, we present a novel feature extraction methodology for automated WCE image analysis. It aims at discriminating various kinds of abnormalities from the normal contents of WCE images in a machine learning-based classification framework. The extraction of the proposed features involves an unsupervised color-based saliency detection scheme which, unlike current approaches, combines both point- and region-level saliency information with the estimation of local and global image color descriptors. The salient point detection process involves estimating DIstaNces On Selective Aggregation of chRomatic image Components (DINOSARC). The descriptors are extracted from superpixels by co-evaluating both point- and region-level information. The main conclusions of the experiments performed on a publicly available dataset of WCE images are: (a) the proposed salient point detection scheme yields significantly fewer and more relevant salient points; (b) the proposed descriptors are more discriminative than relevant state-of-the-art descriptors, promising a wider adoption of the proposed approach for computer-aided diagnosis in WCE.
38
Sánchez-González A, García-Zapirain B, Sierra-Sosa D, Elmaghraby A. Automatized colon polyp segmentation via contour region analysis. Comput Biol Med 2018; 100:152-164. [DOI: 10.1016/j.compbiomed.2018.07.002]
39
Fan S, Xu L, Fan Y, Wei K, Li L. Computer-aided detection of small intestinal ulcer and erosion in wireless capsule endoscopy images. Phys Med Biol 2018; 63:165001. [PMID: 30033931 DOI: 10.1088/1361-6560/aad51c]
Abstract
A novel computer-aided detection method based on a deep learning framework was proposed to detect small intestinal ulcers and erosions in wireless capsule endoscopy (WCE) images. To the best of our knowledge, this is the first time a deep learning framework has been exploited for automated ulcer and erosion detection in WCE images. Compared with traditional detection methods, a deep learning framework can derive image features directly from the data and increase recognition accuracy as well as efficiency, especially for big data. The developed method included image cropping and image compression. The AlexNet convolutional neural network was trained on a database of tens of thousands of WCE images to differentiate lesions from normal tissue. Ulcer and erosion detection reached high accuracies of 95.16% and 95.34%, sensitivities of 96.80% and 93.67%, and specificities of 94.79% and 95.98%, respectively. The area under the receiver operating characteristic curve was over 0.98 for both networks. These promising results indicate that the proposed method has the potential to work in tandem with doctors to efficiently detect intestinal ulcers and erosions.
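The accuracy, sensitivity, and specificity figures quoted above all derive from the same confusion-matrix counts; for reference, a small helper showing the arithmetic (pure Python, with illustrative toy counts, not the study's data):

```python
def detection_metrics(tp, fp, tn, fn):
    # Standard detection metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true lesion frames detected
    specificity = tn / (tn + fp)   # fraction of normal frames correctly kept
    return accuracy, sensitivity, specificity

# Toy counts: 968 of 1000 lesion frames detected, 948 of 1000 normal frames kept.
acc, sens, spec = detection_metrics(tp=968, fp=52, tn=948, fn=32)
print(acc, sens, spec)  # 0.958 0.968 0.948
```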
Affiliation(s)
- Shanhui Fan
- College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018, People's Republic of China
40
Kim SH, Yang DH, Kim JS. Current Status of Interpretation of Small Bowel Capsule Endoscopy. Clin Endosc 2018; 51:329-333. [PMID: 30078306 PMCID: PMC6078920 DOI: 10.5946/ce.2018.095]
Abstract
Capsule endoscopy (CE) has revolutionized direct small bowel imaging and is widely used in clinical practice. Remote visualization of bowel images enables painless, well-tolerated endoscopic examinations. Small bowel CE has a high diagnostic yield and the ability to examine the entire small bowel. The diagnostic yield of CE relies on lesion detection and interpretation. In this review, issues related to lesion detection and interpretation of CE have been addressed, and the current status of automated reading software development has been reviewed. Clinical significance of an external real-time image viewer has also been described.
Affiliation(s)
- Su Hwan Kim
- Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Dong-Hoon Yang
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jin Su Kim
- Division of Gastroenterology, Department of Internal Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea College of Medicine, Seoul, Korea
41
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING 2018; 27:2379-2392. [PMID: 29470172 DOI: 10.1109/tip.2018.2801119]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and a serious threat to human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection, but this remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge-feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, yielding enhanced feature maps that emphasize tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
42
Performance assessment of a bleeding detection algorithm for endoscopic video based on classifier fusion method and exhaustive feature selection. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.10.011]
43
Organic Boundary Location Based on Color-Texture of Visual Perception in Wireless Capsule Endoscopy Video. JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:3090341. [PMID: 29599946 PMCID: PMC5823416 DOI: 10.1155/2018/3090341]
Abstract
This paper addresses the problem of automatically locating the boundary between the stomach and the small intestine (the pylorus) in wireless capsule endoscopy (WCE) video. For efficient image segmentation, a color-saliency region detection (CSD) method is developed to obtain the potentially valid region of the frame (VROF). To improve the accuracy of locating the pylorus, we design a Monitor-Judge model: a color-texture fusion feature of visual perception (CTVP) is constructed from a grey-level co-occurrence matrix (GLCM) feature, computed on the maximum moments of the phase congruency covariance, together with a hue-saturation histogram feature in HSI color space; a support vector machine (SVM) classifier with the CTVP feature is then used to locate the pylorus. Experimental results on 30 real WCE videos demonstrate that the proposed location method outperforms related techniques.
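The GLCM component of the CTVP feature above can be sketched in a few lines: quantize the image to a few grey levels, count co-occurring level pairs at a fixed pixel offset, and summarize the normalized matrix with texture statistics. The following is a generic GLCM illustration (plain NumPy; the quantization, offset, and statistics are common textbook choices, not necessarily the paper's):

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    # Grey-level co-occurrence matrix for one pixel offset, normalized to sum to 1.
    q = (gray.astype(np.float64) * levels / 256).astype(int).clip(0, levels - 1)
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_stats(m):
    # Contrast and homogeneity: two classic Haralick-style texture descriptors.
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    homogeneity = (m / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity

flat = np.full((16, 16), 128, dtype=np.uint8)  # uniform patch: zero contrast
contrast, homogeneity = glcm_stats(glcm(flat))
print(contrast, homogeneity)  # 0.0 1.0
```

In a pipeline like the one described above, such statistics would be concatenated with the color-histogram feature before being fed to the SVM.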
44
Formulation and statistical evaluation of an automated algorithm for locating small bowel tumours in wireless capsule endoscopy. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.07.003]
45
An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images. J Med Syst 2017; 41:119. [DOI: 10.1007/s10916-017-0769-5]
46
Yuan Y, Meng MQH. Deep learning for polyp recognition in wireless capsule endoscopy images. Med Phys 2017; 44:1379-1389. [PMID: 28160514 DOI: 10.1002/mp.12147]
Abstract
PURPOSE Wireless capsule endoscopy (WCE) enables physicians to examine the digestive tract without any surgical operation, at the cost of a large volume of images to be analyzed. In the computer-aided diagnosis of WCE images, the main challenge arises from the difficulty of robustly characterizing images. This study aims to provide a discriminative description of WCE images and assist physicians in recognizing polyp images automatically. METHODS We propose a novel deep feature learning method, named stacked sparse autoencoder with image manifold constraint (SSAEIM), to recognize polyps in WCE images. SSAEIM differs from the traditional sparse autoencoder (SAE) by introducing an image manifold constraint, constructed from a nearest-neighbor graph, that represents the intrinsic structure of the images. The constraint enforces that images within the same category share similar learned features while images in different categories are kept far apart. Thus, the learned features preserve large inter-category variances and small intra-category variances. RESULTS The average overall recognition accuracy (ORA) of our method for WCE images is 98.00%. The accuracies for polyps, bubbles, turbid images, and clear images are 98.00%, 99.50%, 99.00%, and 95.50%, respectively. Moreover, the comparison results show that SSAEIM outperforms existing polyp recognition methods with a higher ORA. CONCLUSION The comprehensive results demonstrate that the proposed SSAEIM provides a descriptive characterization of WCE images and recognizes polyps in a WCE video accurately. The method could be further utilized in clinical trials to relieve physicians of tedious image reading work.
Affiliation(s)
- Yixuan Yuan
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
- Max Q-H Meng
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
47
Chen H, Wu X, Tao G, Peng Q. Automatic content understanding with cascaded spatial–temporal deep framework for capsule endoscopy videos. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.077]
48
49
Wang S, Cong Y, Fan H, Liu L, Li X, Yang Y, Tang Y, Zhao H, Yu H. Computer-Aided Endoscopic Diagnosis Without Human-Specific Labeling. IEEE Trans Biomed Eng 2016; 63:2347-2358. [DOI: 10.1109/tbme.2016.2530141]
50
Charisis VS, Hadjileontiadis LJ. Potential of hybrid adaptive filtering in inflammatory lesion detection from capsule endoscopy images. World J Gastroenterol 2016; 22:8641-8657. [PMID: 27818583 PMCID: PMC5075542 DOI: 10.3748/wjg.v22.i39.8641]
Abstract
A new feature extraction technique for the detection of lesions created by mucosal inflammation in Crohn's disease, based on wireless capsule endoscopy (WCE) image processing, is presented here. Specifically, a novel filtering process, Hybrid Adaptive Filtering (HAF), was developed for efficient extraction of lesion-related structural/textural characteristics from WCE images by applying genetic algorithms to the Curvelet-based representation of the images. Additionally, Differential Lacunarity (DLac) analysis was applied for feature extraction from the HAF-filtered images. The resulting scheme, HAF-DLac, incorporates support vector machines for robust lesion recognition. For training and testing HAF-DLac, an 800-image database was used, acquired from 13 patients who underwent WCE examinations; the abnormal cases were grouped into mild and severe according to the severity of the depicted lesion for a more extensive performance evaluation. Experimental results, along with comparisons with related efforts, show that the HAF-DLac approach evidently outperforms them in WCE image analysis for automated lesion detection, yielding up to 93.8% accuracy, 95.2% sensitivity, 92.4% specificity and 92.6% precision. The promising performance of HAF-DLac paves the way for a complete computer-aided diagnosis system that could support physicians' clinical practice.