1
Li Z, Lu J, Zhang B, Si J, Zhang H, Zhong Z, He S, Cai W, Li T. New Model and Public Online Prediction Platform for Risk Stratification of Vocal Cord Leukoplakia. Laryngoscope 2024. [PMID: 38828682] [DOI: 10.1002/lary.31555]
Abstract
OBJECTIVE To extract texture features from vocal cord leukoplakia (VCL) images and establish a VCL risk stratification prediction model using machine learning (ML) techniques. METHODS A total of 462 patients with pathologically confirmed VCL were retrospectively collected and divided into low-risk and high-risk groups. Five-fold cross-validation was used to ensure the generalization ability of the model built on the included dataset and to avoid overfitting. In total, 504 texture features were extracted from each laryngoscope image. After feature selection, 10 ML classifiers were used to construct the model, and SHapley Additive exPlanations (SHAP) was employed for feature analysis. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC) were used for evaluation. In addition, the model was deployed as an online application for public use and further tested on an independent dataset of 52 VCL cases. RESULTS Twelve features were finally selected. Random forest (RF) achieved the best performance: the mean accuracy, sensitivity, specificity, and AUC of the 5-fold cross-validation were 92.2 ± 4.1%, 95.6 ± 4.0%, 85.8 ± 5.8%, and 90.7 ± 4.9%, respectively, substantially higher than the clinicians' performance (AUC between 63.1% and 75.2%). SHAP ranked the importance of the 12 texture features to the model. On the additional independent dataset, the corresponding results were 92.3%, 95.7%, 90.0%, and 93.3%. CONCLUSION The proposed VCL risk stratification prediction model, which has been developed into a public online prediction platform, may be applied in practical clinical work. LEVEL OF EVIDENCE 3 Laryngoscope, 2024.
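The evaluation loop described in this abstract — a random-forest classifier scored by 5-fold cross-validation on accuracy, sensitivity, specificity, and AUC — can be sketched with scikit-learn. The synthetic matrix below merely stands in for the paper's 462 cases and 12 selected texture features; it is not the authors' code or data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the paper's 462 cases x 12 selected texture features.
X, y = make_classification(n_samples=462, n_features=12, random_state=0)

scoring = {
    "accuracy": "accuracy",
    "sensitivity": make_scorer(recall_score, pos_label=1),  # recall of high-risk
    "specificity": make_scorer(recall_score, pos_label=0),  # recall of low-risk
    "auc": "roc_auc",
}
cv = cross_validate(RandomForestClassifier(random_state=0), X, y, cv=5,
                    scoring=scoring)
for name in ("accuracy", "sensitivity", "specificity", "auc"):
    scores = cv[f"test_{name}"]
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A SHAP-style ranking of the trained forest's features could then be obtained separately, e.g. with the `shap` package's tree explainer.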
Affiliation(s)
- Zufei Li
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jinghui Lu
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, U.S.A
- Baiwen Zhang
- Institute of Information and Artificial Intelligence Technology, Beijing Academy of Science and Technology, Beijing, 100089, China
- Joshua Si
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, U.S.A
- Hong Zhang
- Department of Pathology, Peking University First Hospital, Beijing, China
- Zhen Zhong
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Shuai He
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Wenli Cai
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, U.S.A
- Tiancheng Li
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University First Hospital, Beijing, China
2
Guo X, Xu L, Li S, Xu M, Chu Y, Jiang Q. Cascade-EC Network: Recognition of Gastrointestinal Multiple Lesions Based on EfficientNet and CA_stm_Retinanet. J Imaging Inform Med 2024. [PMID: 38587768] [DOI: 10.1007/s10278-024-01096-9]
Abstract
Capsule endoscopy (CE) is non-invasive and painless for gastrointestinal examination. However, it generates a heavy image-reviewing workload for clinicians, making examinations prone to missed and incorrect diagnoses. Current research has concentrated primarily on binary classifiers, multi-class classifiers covering fewer than four abnormality types, detectors restricted to a specific segment of the digestive tract, and segmenters for a single type of anomaly. Because of intra-class variation, creating a unified scheme for detecting multiple gastrointestinal diseases is particularly challenging. Cascade-EC, the cascade neural network designed in this study, can automatically identify and localize four types of gastrointestinal lesions in CE images: angiectasis, bleeding, erosion, and polyp. Cascade-EC consists of EfficientNet for image classification and CA_stm_Retinanet for lesion detection and localization. As the first stage, the EfficientNet network classifies CE images; as the second stage, CA_stm_Retinanet performs target detection and localization on the classified images. CA_stm_Retinanet adopts the general architecture of RetinaNet; its feature extraction module is the CA_stm_Backbone, built from a stack of CA_stm Blocks, which adopt the split-transform-merge strategy and introduce coordinate attention. The dataset, from Shanghai East Hospital, was collected with PillCam SB3 and AnKon capsule endoscopes and contains 7936 images of 317 patients from 2017 to 2021. On the testing set, Cascade-EC achieved an average precision of 94.55%, an average recall of 90.60%, and an average F1 score of 92.26% in the multi-lesion classification task; its mean mAP@0.5 for detecting the four disease types was 85.88%. The experimental results show that, compared with a single object detection network, Cascade-EC performs better and can effectively assist clinicians in classifying and detecting multiple lesions in CE images.
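The two-stage control flow of such a cascade — a classifier gating a detector — can be illustrated with stub models. The lambdas below are placeholders, not the paper's EfficientNet or CA_stm_Retinanet:

```python
def cascade_predict(image, classify, detect):
    """Two-stage cascade: stage 1 classifies the frame; only frames judged
    abnormal reach the stage-2 detector for localization."""
    label = classify(image)             # stage 1, e.g. a classification network
    if label == "normal":
        return label, []                # normal frames skip detection entirely
    return label, detect(image, label)  # stage 2, e.g. a detection network

# Toy stand-ins for the two stages (placeholders, not the paper's networks):
classify = lambda img: "polyp" if img.get("abnormal") else "normal"
detect = lambda img, label: [(10, 10, 50, 50, label, 0.9)]  # (x1, y1, x2, y2, cls, score)

print(cascade_predict({"abnormal": True}, classify, detect))
print(cascade_predict({"abnormal": False}, classify, detect))
```

The design point is that the cheap first stage filters the large volume of normal CE frames so the heavier detector only runs on suspect images.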
Affiliation(s)
- Xudong Guo
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Lei Xu
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Shengnan Li
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Meidong Xu
- Endoscopy Center, Department of Gastroenterology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Yuan Chu
- Endoscopy Center, Department of Gastroenterology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Qinfen Jiang
- Department of Information Management, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
3
Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-Aided Bleeding Detection Algorithms for Capsule Endoscopy: A Systematic Review. Sensors (Basel) 2023; 23:7170. [PMID: 37631707] [PMCID: PMC10459126] [DOI: 10.3390/s23167170]
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities like bleeding. However, CE captures a huge number of image frames, constituting a time-consuming and tedious task for medical experts to manually inspect. To address this issue, researchers have focused on computer-aided bleeding detection systems to automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five different repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art computer-aided bleeding detection algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper is concluded by providing future direction for computer-aided bleeding detection research.
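As a minimal illustration of the color-space features this review surveys, a pixel-level HSV rule of thumb for "red-dominant" pixels might look like the following. The thresholds are arbitrary assumptions for demonstration, not values taken from any reviewed algorithm:

```python
import colorsys

def looks_bloody(rgb, h_max=0.03, s_min=0.6, v_min=0.2):
    """Crude HSV rule of thumb for a 'red-dominant' pixel.

    rgb is an (r, g, b) triple in [0, 1]; the thresholds are illustrative
    assumptions. Hue wraps around, so 'red' lives near both 0 and 1.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return (h <= h_max or h >= 1.0 - h_max) and s >= s_min and v >= v_min

print(looks_bloody((0.8, 0.1, 0.1)))  # saturated red -> True
print(looks_bloody((0.8, 0.8, 0.7)))  # pale mucosa tone -> False
```

Real systems in the review replace such hand-set thresholds with learned classifiers over richer color and texture features.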
Affiliation(s)
- Ahmmad Musha
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Rehnuma Hasnat
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Abdullah Al Mamun
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Em Poh Ping
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Tonmoy Ghosh
- Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
4
Chu Y, Huang F, Gao M, Zou DW, Zhong J, Wu W, Wang Q, Shen XN, Gong TT, Li YY, Wang LF. Convolutional neural network-based segmentation network applied to image recognition of angiodysplasias lesion under capsule endoscopy. World J Gastroenterol 2023; 29:879-889. [PMID: 36816625] [PMCID: PMC9932427] [DOI: 10.3748/wjg.v29.i5.879]
Abstract
BACKGROUND Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manual reading of the entire gastrointestinal tract is time-consuming and requires a heavy workload, which affects the accuracy of diagnosis.
AIM To evaluate whether artificial intelligence can assist the diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time.
METHODS We proposed a convolutional neural network semantic segmentation model with feature fusion that automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network; the fusion mechanism combines shallow and deep features, and images are classified at the pixel level to achieve segmentation and recognition of vascular dysplasia. Training and test sets were constructed, and the model was compared with PSPNet, Deeplab3+, and UperNet.
RESULTS On the test set constructed in the study, the model achieved satisfactory results: pixel accuracy was 99%, mean intersection over union was 0.69, negative predictive value was 98.74%, and positive predictive value was 94.27%. The model has 46.38 M parameters and 467.2 G floating-point operations, and segmenting and recognizing one image takes 0.6 s.
CONCLUSION Constructing a deep learning-based segmentation network to segment and recognize angiodysplasia lesions is an effective and feasible method for diagnosing them.
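Pixel accuracy and mean intersection over union, as reported above, are both derived from a class-wise confusion matrix. A minimal NumPy sketch on a toy two-class label map (generic definitions, not the authors' evaluation code):

```python
import numpy as np

def pixel_metrics(pred, target, n_classes):
    """Pixel accuracy and mean IoU computed from integer label maps."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (target.ravel(), pred.ravel()), 1)  # rows: truth, cols: prediction
    acc = np.trace(cm) / cm.sum()
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)                # guard against empty classes
    return float(acc), float(iou.mean())

# Toy 2-class label maps (0 = background, 1 = lesion):
t = np.array([[0, 0, 1, 1]])
p = np.array([[0, 1, 1, 1]])
acc, miou = pixel_metrics(p, t, 2)
print(acc, miou)  # 0.75 and the mean of IoU(bg)=1/2, IoU(lesion)=2/3
```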
Affiliation(s)
- Ye Chu
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Fang Huang
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Min Gao
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Duo-Wu Zou
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Jie Zhong
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Wei Wu
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Qi Wang
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Xiao-Nan Shen
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Ting-Ting Gong
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Yuan-Yi Li
- Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Li-Fu Wang
- Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
5
Time-based self-supervised learning for Wireless Capsule Endoscopy. Comput Biol Med 2022; 146:105631. [DOI: 10.1016/j.compbiomed.2022.105631]
7
DFCA-Net: Dual Feature Context Aggregation Network for Bleeding Areas Segmentation in Wireless Capsule Endoscopy Images. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00689-5]
8
Proposing Novel Data Analytics Method for Anatomical Landmark Identification from Endoscopic Video Frames. J Healthc Eng 2022; 2022:8151177. [PMID: 35251578] [PMCID: PMC8890842] [DOI: 10.1155/2022/8151177]
Abstract
Background Anatomical landmarks contain characteristics that guide gastroenterologists during endoscopy, and experts can use them to confirm that an examination is complete. Automatic detection of anatomical landmarks in endoscopic video frames can therefore help physicians while screening the gastrointestinal (GI) tract. Method This study presents a novel automatic method for anatomical landmark detection in the GI tract from endoscopic video frames, based on a semisupervised deep convolutional neural network (CNN), and compares the results with a supervised CNN model. We consider the anatomical landmarks from the Kvasir dataset, which includes 500 images for each of the classes Z-line, pylorus, and cecum, with resolutions varying from 750 × 576 up to 1920 × 1072 pixels. Result Experimental results show that the supervised CNN performs highly desirably, with an accuracy of 100%, and that our proposed semisupervised CNN is competitive, differing only slightly. Trained with 1%, 5%, 10%, and 20% of the training records labeled, the semisupervised model achieves average accuracies of 83%, 98%, 99%, and 99%, respectively. Conclusion The main advantage of our proposed method is that it achieves high accuracy from a small amount of labeled data, saving the labor, cost, and time otherwise required for labeling more data.
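The semisupervised idea — training with only a small labeled fraction — can be sketched with scikit-learn's self-training wrapper on synthetic data. The paper itself uses a semisupervised CNN on Kvasir images, which this toy example does not reproduce:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in data; not the Kvasir images used in the paper.
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)

# Keep labels for only ~5% of samples; sklearn marks unlabeled points with -1.
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) > 0.05] = -1

# Self-training: fit on the labeled few, pseudo-label confident unlabeled
# points, and refit until no more points pass the confidence threshold.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y_semi)
acc = accuracy_score(y, model.predict(X))
print(f"initially labeled: {(y_semi != -1).sum()} / {len(y)}, accuracy: {acc:.3f}")
```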
9
Kanakatte A, Ghose A. Precise Bleeding and Red lesions localization from Capsule Endoscopy using Compact U-Net. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3089-3092. [PMID: 34891895] [DOI: 10.1109/embc46164.2021.9630301]
Abstract
Wireless capsule endoscopy is a non-invasive and painless procedure for detecting anomalies in the gastrointestinal tract. A single examination yields up to 8 h of video and requires 45-180 min for diagnosis, depending on complexity, so image and video computational methods are needed to increase both the efficiency and the accuracy of diagnosis. In this paper, a compact U-Net with fewer encoder-decoder pairs is presented to detect and precisely segment bleeding and red lesions in endoscopy data. The proposed compact U-Net is compared with the original U-Net and with other methods reported in the literature. The results show that the proposed compact network performs on par with the original network but trains faster and consumes less memory. The proposed model also achieved a Dice score of 91%, outperforming other reported methods on a blind WCE test dataset, none of whose images were used for training.
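The Dice score used to evaluate the segmentation can be computed for binary masks as follows (the generic definition, not the authors' implementation):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: overlap of one pixel out of |A|=2 and |B|=1.
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
print(round(float(dice(a, b)), 3))  # 2*1 / (2+1) -> 0.667
```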
10
Automated Bowel Polyp Detection Based on Actively Controlled Capsule Endoscopy: Feasibility Study. Diagnostics (Basel) 2021; 11:1878. [PMID: 34679575] [PMCID: PMC8535114] [DOI: 10.3390/diagnostics11101878]
Abstract
This paper presents an active locomotion capsule endoscope system with 5D position sensing and real-time automated polyp detection for small-bowel and colon applications. An electromagnetic actuation (EMA) system consisting of stationary electromagnets remotely controls a magnetic capsule endoscope with multi-degree-of-freedom locomotion. For position sensing, an electronic system using a magnetic sensor array tracks the position and orientation of the magnetic capsule during movement. The system is integrated with a deep learning model, YOLOv3, which can automatically identify colorectal polyps in real time with an average precision of 85%. The feasibility of the proposed active locomotion and localization is validated through in vitro experiments in a phantom duodenum. This study provides a promising solution for automatic diagnostics of the bowel and colon using an actively controlled capsule endoscope, which could be applied clinically in the future.
11
Mascarenhas Saraiva M, Ribeiro T, Afonso J, Ferreira JP, Cardoso H, Andrade P, Parente MP, Jorge RN, Macedo G. Artificial Intelligence and Capsule Endoscopy: Automatic Detection of Small Bowel Blood Content Using a Convolutional Neural Network. GE Port J Gastroenterol 2021; 29:331-338. [PMID: 36159196] [PMCID: PMC9485980] [DOI: 10.1159/000518901]
Abstract
Introduction Capsule endoscopy has revolutionized the management of patients with obscure gastrointestinal bleeding. Nevertheless, reading capsule endoscopy images is time-consuming and prone to overlooking significant lesions, thus limiting its diagnostic yield. We aimed to create a deep learning algorithm for automatic detection of blood and hematic residues in the enteric lumen in capsule endoscopy exams. Methods A convolutional neural network was developed based on a total pool of 22,095 capsule endoscopy images (13,510 images containing luminal blood and 8,585 of normal mucosa or other findings). A training dataset comprising 80% of the total pool of images was defined. The performance of the network was compared to a consensus classification provided by 2 specialists in capsule endoscopy. Subsequently, we evaluated the performance of the network using an independent validation dataset (20% of total image pool), calculating its sensitivity, specificity, accuracy, and precision. Results Our convolutional neural network detected blood and hematic residues in the small bowel lumen with an accuracy and precision of 98.5 and 98.7%, respectively. The sensitivity and specificity were 98.6 and 98.9%, respectively. The analysis of the testing dataset was completed in 24 s (approximately 184 frames/s). Discussion/Conclusion We have developed an artificial intelligence tool capable of effectively detecting luminal blood. The development of these tools may enhance the diagnostic accuracy of capsule endoscopy when evaluating patients presenting with obscure small bowel bleeding.
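The four reported metrics all follow from the 2×2 confusion matrix. A small sketch with illustrative counts (not the paper's validation data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, and specificity from 2x2 counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # a.k.a. recall
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for illustration only:
m = binary_metrics(tp=95, fp=2, tn=98, fn=5)
print({k: round(v, 3) for k, v in m.items()})
```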
Affiliation(s)
- Miguel Mascarenhas Saraiva
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Tiago Ribeiro
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- João Afonso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- João P.S. Ferreira
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI − Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Hélder Cardoso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Patrícia Andrade
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
- Marco P.L. Parente
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI − Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Renato N. Jorge
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
- INEGI − Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Guilherme Macedo
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
12
Tan ZZ, Li XD, Kong CD, Sha N, Hou YN, Zhao KH. Engineering Bacteria to Monitor the Bleeding of Animals Using Far-Red Fluorescence. ACS Sens 2021; 6:1770-1778. [PMID: 33978416] [DOI: 10.1021/acssensors.0c02482]
Abstract
Microorganisms living in animals can function as drug delivery systems or as detectors for some diseases. Here, we developed a biosensor in Escherichia coli by deleting hemF and introducing ho1, chuA, and bdfp1.6. HemF is an enzyme involved in heme synthesis in E. coli; ChuA and HO1 transfer extracellular heme into cells and generate biliverdin (BV); and BDFP1.6 binds BV autocatalytically, emitting a far-red fluorescence signal at 667 nm. We therefore named this biosensor the far-red light bleeding detector (FRLBD). Our results indicated that the FRLBD is highly efficient and specific for detecting heme or blood in vitro. Moreover, the FRLBD could detect aspirin-induced bleeding in zebrafish, and a convolutional neural network proved an appropriate model for identifying the fluorescence features in the images.
Affiliation(s)
- Zi-Zhu Tan
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
- Xiao-Dan Li
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
- Chao-Di Kong
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
- Na Sha
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
- Ya-Nan Hou
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
- Kai-Hong Zhao
- State Key Laboratory of Agricultural Microbiology, Huazhong Agricultural University, Wuhan 430070, P.R. China
13
Smedsrud PH, Thambawita V, Hicks SA, Gjestang H, Nedrejord OO, Næss E, Borgli H, Jha D, Berstad TJD, Eskeland SL, Lux M, Espeland H, Petlund A, Nguyen DTD, Garcia-Ceja E, Johansen D, Schmidt PT, Toth E, Hammer HL, de Lange T, Riegler MA, Halvorsen P. Kvasir-Capsule, a video capsule endoscopy dataset. Sci Data 2021; 8:142. [PMID: 34045470] [PMCID: PMC8160146] [DOI: 10.1038/s41597-021-00920-z]
Abstract
Artificial intelligence (AI) is predicted to have profound effects on the future of video capsule endoscopy (VCE) technology. The potential lies in improving anomaly detection while reducing manual labour. Existing work demonstrates the promising benefits of AI-based computer-assisted diagnosis systems for VCE, and also shows considerable room for further improvement. However, medical data are often sparse and unavailable to the research community, and qualified medical personnel rarely have time for tedious labelling work. We present Kvasir-Capsule, a large VCE dataset collected from examinations at a Norwegian hospital. Kvasir-Capsule consists of 117 videos, from which a total of 4,741,504 image frames can be extracted. We have labelled and medically verified 47,238 frames with bounding boxes around findings from 14 different classes; the remaining 4,694,266 unlabelled frames are also included in the dataset. The Kvasir-Capsule dataset can play a valuable role in developing better algorithms that realize the true potential of VCE technology.
Affiliation(s)
- Pia H Smedsrud
- SimulaMet, Oslo, Norway.
- University of Oslo, Oslo, Norway.
- Augere Medical AS, Oslo, Norway.
- Steven A Hicks
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Espen Næss
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Hanna Borgli
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Debesh Jha
- SimulaMet, Oslo, Norway
- UIT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UIT The Arctic University of Norway, Tromsø, Norway
- Peter T Schmidt
- Karolinska Institutet, Department of Medicine, Solna, Sweden
- Ersta Hospital, Department of Medicine, Stockholm, Sweden
- Ervin Toth
- Department of Gastroenterology, Skåne University Hospital, Malmö Lund University, Malmö, Sweden
- Hugo L Hammer
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange
- Department of Medical Research, Bærum Hospital, Gjettum, Norway
- Augere Medical AS, Oslo, Norway
- Medical Department, Sahlgrenska University Hospital-Mölndal Hospital, Göteborg, Sweden
- Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Göteborg, Sweden
- Pål Halvorsen
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
14
Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355] [DOI: 10.2174/1573405616666200928144626]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians or patients. To address this, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is impractical even for an expert physician, because thoroughly analyzing thousands of images is time-consuming. Hence the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze the images. Many researchers have proposed techniques for automated recognition and classification of abnormalities in captured images. METHODS This article discusses existing methods for automated classification, segmentation, and detection of several GI diseases, giving comprehensive detail on these state-of-the-art methods. The literature is divided into subsections based on preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep learning-based techniques. Finally, issues, challenges, and limitations are addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review combines information on a number of GI disease diagnosis methods in one place and will help researchers develop new algorithms and approaches for early detection of GI diseases with more promising results than the existing literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
15
Rathnamala S, Jenicka S. Automated bleeding detection in wireless capsule endoscopy images based on color feature extraction from Gaussian mixture model superpixels. Med Biol Eng Comput 2021; 59:969-987. [PMID: 33837919] [DOI: 10.1007/s11517-021-02352-8]
Abstract
Wireless capsule endoscopy (WCE) is a commonly employed modality for examining gastrointestinal tract pathologies. However, the time taken to interpret these images is very high because of the large volume of images generated. Automated detection of disorders in these images can facilitate faster clinical intervention. In this paper, we propose an automated system based on Gaussian mixture model superpixels for bleeding detection and segmentation of candidate regions. The proposed system is realized with a classic binary support vector machine classifier trained with seven features, including color and texture attributes, extracted from the Gaussian mixture model superpixels of the WCE images. When bleeding images are detected, the bleeding regions are segmented from them by incrementally grouping the superpixels based on ΔE color differences. Tested on standard datasets, the system exhibits the best performance relative to state-of-the-art approaches with respect to classification accuracy, feature selection, computational time, and segmentation accuracy. The proposed system achieves 99.88% accuracy, 99.83% sensitivity, and 100% specificity, signifying its effectiveness in bleeding detection with very few classification errors.
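The general recipe in this abstract (color statistics from GMM-derived pixel groupings fed to a binary SVM) can be illustrated with a minimal, hedged sketch. It clusters pixel colors with scikit-learn's `GaussianMixture` as a crude stand-in for true GMM superpixels and trains on synthetic frames; the feature set and data are assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch: GMM color clusters (a stand-in for GMM superpixels) plus
# per-cluster color statistics, classified by a binary SVM, on synthetic
# frames. Illustrative only -- not the authors' seven-feature pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cluster_color_features(image, n_clusters=4):
    """Cluster pixels by RGB color; return per-cluster mean/std features,
    sorted by mean red so the feature order is consistent across images."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = GaussianMixture(n_components=n_clusters,
                             random_state=0).fit_predict(pixels)
    stats = []
    for k in range(n_clusters):
        cluster = pixels[labels == k]
        if cluster.size == 0:          # guard against an empty component
            cluster = pixels
        stats.append(np.concatenate([cluster.mean(axis=0),
                                     cluster.std(axis=0)]))
    stats.sort(key=lambda s: -s[0])    # descending mean red channel
    return np.concatenate(stats)

def synthetic_frame(bleeding):
    """Toy 16x16 RGB frame; bleeding frames get a saturated red patch."""
    img = rng.integers(90, 140, size=(16, 16, 3))
    if bleeding:
        img[4:10, 4:10] = [200, 30, 30]
    return img

y = np.array([0, 1] * 10)
X = np.array([cluster_color_features(synthetic_frame(b)) for b in y])
clf = SVC(kernel="linear").fit(X, y)
pred = clf.predict([cluster_color_features(synthetic_frame(1))])[0]
```

Sorting the clusters by mean red keeps the concatenated feature vector comparable between images, which is the role the spatially coherent superpixels play in the actual system.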
Affiliation(s)
- S Rathnamala
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India.
- S Jenicka
- Department of CSE, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India.
16
Abstract
Video capsule endoscopy has been proven to be a beneficial tool to inspect the gastrointestinal lumen but its true impact may lie in utilization outside of traditional gastroenterology settings such as in the emergency room, the intensive care unit, and outpatient settings. Some advantages of video capsule endoscopy are that its administration does not require special training, patients do not require anesthesia, and videos can be shared with off-site consultants.
17
Mascarenhas M, Afonso J, Andrade P, Cardoso H, Macedo G. Artificial intelligence and capsule endoscopy: unravelling the future. Ann Gastroenterol 2021; 34:300-309. [PMID: 33948053 PMCID: PMC8079882 DOI: 10.20524/aog.2021.0606] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Accepted: 12/20/2020] [Indexed: 12/22/2022] Open
Abstract
The applicability of artificial intelligence (AI) in gastroenterology is a hot topic because of its disruptive nature. Capsule endoscopy plays an important role in several areas of digestive pathology, namely the investigation of obscure hemorrhagic lesions and the management of inflammatory bowel disease. There is therefore growing interest in the use of AI in capsule endoscopy. Several studies have demonstrated the enormous potential of convolutional neural networks in various areas of capsule endoscopy. The rapid growth of AI's usefulness in capsule endoscopy requires consideration of its medium- and long-term impact on clinical practice. Indeed, the advent of deep learning in capsule endoscopy could lead to a paradigm shift in clinical activity in this setting. In this review, we aim to illustrate the state of the art of AI in the field of capsule endoscopy.
Affiliation(s)
- João Afonso
- Gastroenterology Department, Hospital de São João, Porto, Portugal
- Patrícia Andrade
- Gastroenterology Department, Hospital de São João, Porto, Portugal
- Hélder Cardoso
- Gastroenterology Department, Hospital de São João, Porto, Portugal
- Guilherme Macedo
- Gastroenterology Department, Hospital de São João, Porto, Portugal
18
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 07/13/2020] [Accepted: 07/18/2020] [Indexed: 12/12/2022]