1
Choi KS, Park D, Kim JS, Cheung DY, Lee BI, Cho YS, Kim JI, Lee S, Lee HH. Deep learning in negative small-bowel capsule endoscopy improves small-bowel lesion detection and diagnostic yield. Dig Endosc 2024;36:437-445. [PMID: 37612137 DOI: 10.1111/den.14670]
Abstract
OBJECTIVES Although several studies have shown the usefulness of artificial intelligence for identifying abnormalities in small-bowel capsule endoscopy (SBCE) images, few have proven its actual clinical usefulness. The aim of this study was therefore to examine whether meaningful findings could be obtained when negative SBCE videos were reanalyzed with a deep convolutional neural network (CNN) model. METHODS Clinical data of patients who underwent SBCE for suspected small-bowel bleeding at two academic hospitals between February 2018 and July 2020 were retrospectively collected. All SBCE videos read as negative were reanalyzed with the CNN algorithm developed in our previous study. Meaningful findings such as angioectasias and ulcers were confirmed by two gastroenterologists after reviewing the CNN-selected images. RESULTS Among 202 SBCE videos, 103 (51.0%) were read as negative by humans. Meaningful findings were detected in 63 (61.2%) of these 103 videos after reanalysis with the CNN model: 79 red spots or angioectasias in 40 videos and 66 erosions or ulcers in 35 videos. After reanalysis, the diagnosis changed for 10 (10.3%) patients with initially negative SBCE results. During a mean follow-up of 16.5 months, rebleeding occurred in 19 (18.4%) patients. The rebleeding rate was 23.6% (13/55) for patients with meaningful findings and 16.1% (5/31) for patients without meaningful findings (P = 0.411). CONCLUSION Our CNN algorithm detected meaningful findings in negative SBCE videos that had been missed by human readers. The use of a deep CNN for SBCE image reading is expected to compensate for human error.
Affiliation(s)
- Kyung Seok Choi
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- DoGyeom Park
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Jin Su Kim
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Dae Young Cheung
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Bo-In Lee
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Young-Seok Cho
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Jin Il Kim
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seungchul Lee
- Institute for Convergence Research and Education in Advanced Technology, Yonsei University, Seoul, Korea
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
- Han Hee Lee
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
2
Li Q, Xie W, Wang Y, Qin K, Huang M, Liu T, Chen Z, Chen L, Teng L, Fang Y, Ye L, Chen Z, Zhang J, Li A, Yang W, Liu S. A Deep Learning Application of Capsule Endoscopic Gastric Structure Recognition Based on a Transformer Model. J Clin Gastroenterol 2024:00004836-990000000-00271. [PMID: 38457410 DOI: 10.1097/mcg.0000000000001972]
Abstract
BACKGROUND Gastric structure recognition systems have become increasingly necessary for the accurate diagnosis of gastric lesions in capsule endoscopy. Deep learning, especially with transformer models, has shown great potential for the recognition of gastrointestinal (GI) images owing to its self-attention mechanism. This study aims to establish an identification model of capsule endoscopy gastric structures to improve the clinical applicability of deep learning to endoscopic image recognition. METHODS A total of 3343 wireless capsule endoscopy videos collected at Nanfang Hospital between 2011 and 2021 were used for unsupervised pretraining; 2433 were used for training and 118 for validation. Fifteen upper GI structures were selected for quantifying the examination quality. We also compared the classification performance of the artificial intelligence model and endoscopists by accuracy, sensitivity, specificity, and positive and negative predictive values. RESULTS The transformer-based AI model reached a relatively high level of diagnostic accuracy in gastric structure recognition. For the identification of 15 upper GI structures, the AI model achieved a macroaverage accuracy of 99.6% (95% CI: 99.5-99.7), a macroaverage sensitivity of 96.4% (95% CI: 95.3-97.5), and a macroaverage specificity of 99.8% (95% CI: 99.7-99.9), and it reached a high level of interobserver agreement with endoscopists. CONCLUSIONS The transformer-based AI model can accurately evaluate the gastric structure information of capsule endoscopy with performance equal to that of endoscopists, which will greatly help doctors make diagnoses from large numbers of images and improve the efficiency of examination.
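The self-attention mechanism credited in this abstract reduces to a few lines of linear algebra. Below is a minimal NumPy sketch of scaled dot-product self-attention over a sequence of image-patch embeddings; the dimensions, weight initialization, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of patch embeddings.

    tokens: (n, d) array of patch embeddings; wq/wk/wv: (d, d) projections.
    """
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v                        # context-mixed embeddings

rng = np.random.default_rng(0)
n, d = 16, 32                                 # 16 image patches, 32-dim embeddings
tokens = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
print(out.shape)  # (16, 32)
```

Every output embedding is a weighted mixture of all patch embeddings, which is what lets a transformer relate distant regions of a frame in a single layer.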
Affiliation(s)
- Qingyuan Li
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Weijie Xie
- School of Biomedical Engineering
- Department of Information, Guangzhou First People's Hospital, School of Medicine, South China University of Technology
- Yusi Wang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Kaiwen Qin
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Mei Huang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Lu Chen
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Lan Teng
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Yuxin Fang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Zhenyu Chen
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Jie Zhang
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Aimin Li
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Wei Yang
- School of Biomedical Engineering
- Pazhou Lab, Guangzhou, Guangdong
- Side Liu
- Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital
- Pazhou Lab, Guangzhou, Guangdong
- Department of Gastroenterology, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Zhuhai, China
3
Brzeski A, Dziubich T, Krawczyk H. Visual Features for Improving Endoscopic Bleeding Detection Using Convolutional Neural Networks. Sensors (Basel) 2023;23:9717. [PMID: 38139563 PMCID: PMC10748269 DOI: 10.3390/s23249717]
Abstract
The presented paper investigates the problem of endoscopic bleeding detection in endoscopic videos in the form of a binary image classification task. A set of definitions of high-level visual features of endoscopic bleeding is introduced, incorporating domain knowledge from the field. The high-level features are coupled with respective feature descriptors, enabling automatic capture of the features using image processing methods. Each of the proposed feature descriptors outputs a feature activation map in the form of a grayscale image. The acquired feature maps can be appended in a straightforward way to the original color channels of the input image and passed to the input of a convolutional neural network during the training and inference steps. An experimental evaluation is conducted to compare the classification ROC AUC of feature-extended convolutional neural network models with that of baseline models using regular color image inputs. The advantage of feature-extended models is demonstrated for the ResNet and VGG convolutional neural network architectures.
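The channel-extension step described above is mechanically simple. Below is a NumPy sketch: a toy redness descriptor (an illustration of the idea, not one of the paper's actual descriptors) produces a grayscale activation map that is concatenated after the RGB channels, yielding the wider input tensor a CNN would consume.

```python
import numpy as np

def redness_map(rgb):
    """Toy bleeding-feature descriptor: per-pixel dominance of the red channel.

    rgb: (H, W, 3) float array in [0, 1]. Returns a (H, W) grayscale
    activation map in [0, 1], higher where red dominates green and blue.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    act = r - np.maximum(g, b)      # how much red exceeds the other channels
    return np.clip(act, 0.0, 1.0)

def extend_input(rgb, descriptors):
    """Stack feature activation maps after the color channels, giving a
    (H, W, 3 + k) tensor that a CNN can consume as a k-channel-wider input."""
    maps = [d(rgb)[..., None] for d in descriptors]
    return np.concatenate([rgb] + maps, axis=-1)

img = np.zeros((4, 4, 3))
img[0, 0] = (0.9, 0.1, 0.1)         # one strongly red pixel
x = extend_input(img, [redness_map])
print(x.shape)                       # (4, 4, 4)
```

The only change needed on the network side is widening the first convolution from 3 to 3 + k input channels.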
Affiliation(s)
- Adam Brzeski
- Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, 80-233 Gdańsk, Poland
4
O'Hara FJ, Mc Namara D. Capsule endoscopy with artificial intelligence-assisted technology: Real-world usage of a validated AI model for capsule image review. Endosc Int Open 2023;11:E970-E975. [PMID: 37828977 PMCID: PMC10567136 DOI: 10.1055/a-2161-1816]
Abstract
Background and study aims Capsule endoscopy is a time-consuming procedure with a significant error rate. Artificial intelligence (AI) can potentially reduce reading time substantially by reducing the number of images that need human review. An OMOM AI-enabled small bowel capsule has recently been trained and validated for small bowel capsule endoscopy video review. This study aimed to assess its performance in a real-world setting in comparison with standard reading methods. Patients and methods In this single-center retrospective study, 40 patient studies performed using the OMOM capsule were analyzed first with standard reading methods and later using AI-assisted reading. Reading time, pathology identified, intestinal landmark identification, and bowel preparation assessment (Brotz score) were compared. Results Overall diagnosis correlated 100% between the two reading methods. In a per-lesion analysis, 1293 images of significant lesions were identified by the standard and AI-assisted reading methods combined. AI-assisted reading captured 1268 (98.1%, 95% CI 97.15-98.7) of these findings, while standard reading captured 1114 (86.2%, 95% CI 84.2-87.9), P < 0.001. Mean reading time fell from 29.7 minutes with standard reading to 2.3 minutes with AI-assisted reading (P < 0.001), an average saving of 27.4 minutes per study. The time of the first cecal image showed a wide discrepancy between AI-assisted and standard reading of 99.2 minutes (r = 0.085, P = 0.68). Bowel cleansing evaluation agreed in 97.4% (r = 0.805, P < 0.001). Conclusions AI-assisted reading showed significant time savings without reduced sensitivity in this study. Limitations remain in the evaluation of other indicators.
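As a sketch of where per-lesion figures like these come from, the snippet below recomputes the AI-assisted capture rate (1268 of 1293 findings, from the abstract) with a normal-approximation 95% confidence interval. The paper does not state which interval method it used, so the bounds here are close to, but not identical to, the reported 97.15-98.7.

```python
import math

def proportion_ci(hits, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = hits / total
    se = math.sqrt(p * (1 - p) / total)       # standard error of a proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Per-lesion capture by AI-assisted reading, from the abstract: 1268 of 1293.
p, lo, hi = proportion_ci(1268, 1293)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
```

An exact (Clopper-Pearson) or Wilson interval would narrow or shift these bounds slightly, which likely explains the small difference from the published figures.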
Affiliation(s)
- Fintan John O'Hara
- Gastroenterology, Tallaght University Hospital, Dublin, Ireland
- Medicine, Trinity College Dublin School of Medicine, Dublin, Ireland
- Deirdre Mc Namara
- Gastroenterology, Tallaght University Hospital, Dublin, Ireland
- Medicine, Trinity College Dublin School of Medicine, Dublin, Ireland
5
Laiz P, Vitrià J, Gilabert P, Wenzek H, Malagelada C, Watson AJM, Seguí S. Anatomical landmarks localization for capsule endoscopy studies. Comput Med Imaging Graph 2023;108:102243. [PMID: 37267757 DOI: 10.1016/j.compmedimag.2023.102243]
Abstract
Wireless capsule endoscopy is a medical procedure that uses a small, wireless camera to capture images of the inside of the digestive tract. Identifying the entrance and exit of the small bowel and of the large intestine is one of the first tasks that must be accomplished to read a video. This paper addresses the design of a clinical decision support tool to detect these anatomical landmarks. We have developed a system based on deep learning that combines images, timestamps, and motion data to achieve state-of-the-art results. Our method not only classifies the images as being inside or outside the studied organs but also identifies the entrance and exit frames. The experiments performed with three different datasets (one public and two private) show that our system is able to approximate the landmarks while achieving high accuracy on the classification problem (inside/outside the organ). For the entrance and exit of the studied organs, the distance between predicted and real landmarks is reduced by a factor of 1.5 to 10 with respect to previous state-of-the-art methods.
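The landmark step can be illustrated independently of the deep model. The sketch below assumes a per-frame inside-organ probability (in the paper this comes from a network combining images, timestamps, and motion data) and shows one simple way to turn noisy frame-level scores into entrance/exit frames: smooth with a moving average, then take the first and last frames above a threshold. The function name and the smoothing choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

def entrance_exit(inside_probs, win=5, thr=0.5):
    """Estimate entrance/exit frame indices from noisy per-frame scores.

    inside_probs: per-frame probability that the capsule is inside the organ.
    A centered moving average suppresses isolated misclassifications before
    thresholding; entrance is the first smoothed frame above thr, exit the last.
    """
    kernel = np.ones(win) / win
    smooth = np.convolve(inside_probs, kernel, mode="same")
    inside = np.flatnonzero(smooth >= thr)
    if inside.size == 0:
        return None, None
    return int(inside[0]), int(inside[-1])

# Frames 10-29 are inside the organ, with two spurious flips.
probs = np.zeros(40)
probs[10:30] = 1.0
probs[14] = 0.0    # false "outside" reading inside the organ
probs[35] = 1.0    # false "inside" reading after exit
ent, ext = entrance_exit(probs)
print(ent, ext)
```

Both isolated errors are absorbed by the smoothing window, so the recovered boundaries match the true transit segment.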
Affiliation(s)
- Pablo Laiz
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
- Jordi Vitrià
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
- Pere Gilabert
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
- Santi Seguí
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
6
Pecere S, Chiappetta MF, Del Vecchio LE, Despott E, Dray X, Koulaouzidis A, Fuccio L, Murino A, Rondonotti E, Spaander M, Spada C. The evolving role of small-bowel capsule endoscopy. Best Pract Res Clin Gastroenterol 2023;64-65:101857. [PMID: 37652655 DOI: 10.1016/j.bpg.2023.101857]
Affiliation(s)
- Silvia Pecere
- Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli, IRCCS, Rome, Italy; Centre for Endoscopic Research Therapeutics and Training (CERTT), Università Cattolica del Sacro Cuore, Rome, Italy
- Michele Francesco Chiappetta
- Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli, IRCCS, Rome, Italy; Centre for Endoscopic Research Therapeutics and Training (CERTT), Università Cattolica del Sacro Cuore, Rome, Italy
- Livio Enrico Del Vecchio
- Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli, IRCCS, Rome, Italy; Centre for Endoscopic Research Therapeutics and Training (CERTT), Università Cattolica del Sacro Cuore, Rome, Italy
- Edward Despott
- Royal Free Unit for Endoscopy and Centre for Gastroenterology, UCL Institute for Liver and Digestive Health, Royal Free NHS Foundation Trust, London, United Kingdom; Wolfson Unit for Endoscopy, St Mark's Hospital and Academic Institute, Imperial College London, London, United Kingdom
- Xavier Dray
- Sorbonne University, Centre for Digestive Endoscopy, Saint Antoine Hospital, APHP, Paris, France
- Lorenzo Fuccio
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Gastroenterology Unit, Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
- Alberto Murino
- Royal Free Unit for Endoscopy and Centre for Gastroenterology, UCL Institute for Liver and Digestive Health, Royal Free NHS Foundation Trust, London, United Kingdom; Wolfson Unit for Endoscopy, St Mark's Hospital and Academic Institute, Imperial College London, London, United Kingdom
- Manon Spaander
- Department of Gastroenterology and Hepatology, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Cristiano Spada
- Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli, IRCCS, Rome, Italy; Centre for Endoscopic Research Therapeutics and Training (CERTT), Università Cattolica del Sacro Cuore, Rome, Italy
7
Ribeiro T, Mascarenhas Saraiva MJ, Afonso J, Cardoso P, Mendes F, Martins M, Andrade AP, Cardoso H, Mascarenhas Saraiva M, Ferreira J, Macedo G. Design of a Convolutional Neural Network as a Deep Learning Tool for the Automatic Classification of Small-Bowel Cleansing in Capsule Endoscopy. Medicina (Kaunas) 2023;59:810. [PMID: 37109768 PMCID: PMC10145655 DOI: 10.3390/medicina59040810]
Abstract
Background and objectives: Capsule endoscopy (CE) is a non-invasive method to inspect the small bowel that, like other enteroscopy methods, requires adequate small-bowel cleansing to obtain conclusive results. Artificial intelligence (AI) algorithms have offered important benefits in the field of medical imaging over recent years, particularly through the adaptation of convolutional neural networks (CNNs) for more efficient image analysis. Here, we aimed to develop a deep learning model that uses a CNN to automatically classify the quality of intestinal preparation in CE. Methods: A CNN was designed based on 12,950 CE images obtained at two clinical centers in Porto (Portugal). The quality of the intestinal preparation was classified for each image as: excellent, ≥90% of the image surface with visible mucosa; satisfactory, 50-90% of the mucosa visible; and unsatisfactory, <50% of the mucosa visible. The total set of images was divided in an 80:20 ratio to establish training and validation datasets, respectively. The CNN prediction was compared with the classification established by consensus of a group of three experts in CE, currently considered the gold standard for evaluating cleanliness. Subsequently, the diagnostic performance of the CNN was evaluated using an independent validation dataset. Results: Among the images obtained, 3633 were designated as unsatisfactory preparation, 6005 as satisfactory, and 3312 as excellent. When differentiating the classes of small-bowel preparation, the algorithm achieved an overall accuracy of 92.1%, with a sensitivity of 88.4%, a specificity of 93.6%, a positive predictive value of 88.5%, and a negative predictive value of 93.4%. The area under the curve for the detection of the excellent, satisfactory, and unsatisfactory classes was 0.98, 0.95, and 0.99, respectively. Conclusions: A CNN-based tool was developed to automatically classify small-bowel preparation for CE and was shown to do so accurately. Such a system could enhance the reproducibility of the scales used for this purpose.
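The three-level grading rule used to annotate the images can be written down directly from the definitions in the abstract. This tiny helper only encodes the labeling scheme; the CNN itself predicts the class from pixels, which this sketch does not attempt.

```python
def cleansing_class(visible_mucosa_fraction: float) -> str:
    """Label a frame by visible-mucosa fraction, per the study's definition:
    excellent >= 90%, satisfactory 50-90%, unsatisfactory < 50%."""
    if not 0.0 <= visible_mucosa_fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    if visible_mucosa_fraction >= 0.90:
        return "excellent"
    if visible_mucosa_fraction >= 0.50:
        return "satisfactory"
    return "unsatisfactory"

print(cleansing_class(0.95), cleansing_class(0.7), cleansing_class(0.3))
# excellent satisfactory unsatisfactory
```

Making the thresholds explicit like this is what lets an automated classifier sidestep the interobserver variability of visual grading scales.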
Affiliation(s)
- Tiago Ribeiro
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Miguel José Mascarenhas Saraiva
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- João Afonso
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Pedro Cardoso
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Ana Patrícia Andrade
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Hélder Cardoso
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- INEGI-Institute of Science and Innovation in Mechanical and Industrial Engineering, 4200-465 Porto, Portugal
- Guilherme Macedo
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Gastroenterology and Hepatology, WGO Gastroenterology and Hepatology Training Centre, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
8
Zhou JX, Yang Z, Xi DH, Dai SJ, Feng ZQ, Li JY, Xu W, Wang H. Enhanced segmentation of gastrointestinal polyps from capsule endoscopy images with artifacts using ensemble learning. World J Gastroenterol 2022;28:5931-5943. [PMID: 36405108 PMCID: PMC9669827 DOI: 10.3748/wjg.v28.i41.5931]
Abstract
BACKGROUND Endoscopy artifacts are widespread in real capsule endoscopy (CE) images but not in high-quality standard datasets.
AIM To improve the segmentation performance of polyps from CE images with artifacts based on ensemble learning.
METHODS We collected 277 polyp images with CE artifacts from 5760 h of videos from 480 patients at Guangzhou First People’s Hospital from January 2016 to December 2019. Two public high-quality standard external datasets were retrieved and used for comparison experiments. For each dataset, we randomly split the data into training, validation, and testing sets for model training, selection, and testing. We compared the performance of the base models and the ensemble model in segmenting polyps from images with artifacts.
RESULTS The performance of the semantic segmentation models was affected by artifacts in the sample images, which also affected the results of polyp detection by CE using a single model. The evaluation on real datasets with artifacts and on standard datasets showed that the ensemble of all state-of-the-art models performed better than the best corresponding base learner on the real dataset with artifacts. Compared with the corresponding optimal base learners, the intersection over union (IoU) and Dice scores of the ensemble learning model increased to different degrees, ranging from 0.08% to 7.01% and 0.61% to 4.93%, respectively. Moreover, on the standard datasets without artifacts, most of the ensemble models were slightly better than the base learner, with IoU and Dice changes ranging from -0.28% to 1.20% and -0.61% to 0.76%, respectively.
CONCLUSION Ensemble learning can improve the segmentation accuracy of polyps from CE images with artifacts. Our results demonstrated an improvement in the detection rate of polyps with interference from artifacts.
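The two overlap metrics reported above are standard for segmentation. A minimal NumPy sketch for binary masks:

```python
import numpy as np

def iou_dice(pred, target):
    """Intersection-over-union and Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0    # both masks empty: perfect match
    dice = 2 * inter / (pred.sum() + target.sum()) if union else 1.0
    return float(iou), float(dice)

pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[0:2, 0:2] = 1       # predicted polyp region: 4 px
target[1:3, 0:2] = 1     # ground-truth region: 4 px, overlapping 2 px
iou, dice = iou_dice(pred, target)
print(iou, dice)         # ≈0.333 and 0.5 for this example
```

Dice weights the intersection twice, so it is always at least as large as IoU; the two move together, which is why the paper reports consistent gains on both.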
Affiliation(s)
- Jun-Xiao Zhou
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Zhan Yang
- School of Information, Renmin University of China, Beijing 100872, China
- Ding-Hao Xi
- School of Information, Renmin University of China, Beijing 100872, China
- Shou-Jun Dai
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Zhi-Qiang Feng
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Jun-Yan Li
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Wei Xu
- School of Information, Renmin University of China, Beijing 100872, China
- Hong Wang
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
9
Hanscom M, Cave DR. Endoscopic capsule robot-based diagnosis, navigation and localization in the gastrointestinal tract. Front Robot AI 2022;9:896028. [PMID: 36119725 PMCID: PMC9479458 DOI: 10.3389/frobt.2022.896028]
Abstract
The proliferation of video capsule endoscopy (VCE) would not have been possible without continued technological improvements in imaging and locomotion. Advancements in imaging include both software and hardware improvements but perhaps the greatest software advancement in imaging comes in the form of artificial intelligence (AI). Current research into AI in VCE includes the diagnosis of tumors, gastrointestinal bleeding, Crohn’s disease, and celiac disease. Other advancements have focused on the improvement of both camera technologies and alternative forms of imaging. Comparatively, advancements in locomotion have just started to approach clinical use and include onboard controlled locomotion, which involves miniaturizing a motor to incorporate into the video capsule, and externally controlled locomotion, which involves using an outside power source to maneuver the capsule itself. Advancements in locomotion hold promise to remove one of the major disadvantages of VCE, namely, its inability to obtain targeted diagnoses. Active capsule control could in turn unlock additional diagnostic and therapeutic potential, such as the ability to obtain targeted tissue biopsies or drug delivery. With both advancements in imaging and locomotion has come a corresponding need to be better able to process generated images and localize the capsule’s position within the gastrointestinal tract. Technological advancements in computation performance have led to improvements in image compression and transfer, as well as advancements in sensor detection and alternative methods of capsule localization. Together, these advancements have led to the expansion of VCE across a number of indications, including the evaluation of esophageal and colon pathologies including esophagitis, esophageal varices, Crohn’s disease, and polyps after incomplete colonoscopy. 
Current research has also suggested a role for VCE in acute gastrointestinal bleeding throughout the gastrointestinal tract, as well as in urgent settings such as the emergency department, and in resource-constrained settings, such as during the COVID-19 pandemic. VCE has solidified its role in the evaluation of small bowel bleeding and earned an important place in the practicing gastroenterologist’s armamentarium. In the next few decades, further improvements in imaging and locomotion promise to open up even more clinical roles for the video capsule as a tool for non-invasive diagnosis of luminal gastrointestinal pathologies.
10
Patel A, Vedantam D, Poman DS, Motwani L, Asif N. Obscure Gastrointestinal Bleeding and Capsule Endoscopy: A Win-Win Situation or Not? Cureus 2022;14:e27137. [PMID: 36017285 PMCID: PMC9392966 DOI: 10.7759/cureus.27137]
Abstract
Obscure gastrointestinal bleeding (OGIB) refers to bleeding of uncertain origin that persists or recurs after a negative workup using any of the radiologic evaluation modalities. It can be divided into two types based on whether clinically evident bleeding is present, namely, obscure overt and obscure occult bleeding. As visualization of the bowel mucosa is challenging, capsule endoscopy (CE) is the ideal go-to procedure because the capsule is wireless, ingestible, small, disposable, and, most importantly, non-invasive. This review article compiles various studies to shed light on the guidelines for using CE, its structure and procedure, patient preferences, diagnostic yield, cost-effectiveness, and the future. The goal of this review is to show the influence of CE on OGIB in the aspects mentioned earlier.
11
Alemanni LV, Fabbri S, Rondonotti E, Mussetto A. Recent developments in small bowel endoscopy: the "black box" is now open! Clin Endosc 2022;55:473-479. [PMID: 35831981 PMCID: PMC9329645 DOI: 10.5946/ce.2022.113]
Abstract
Over the last few years, capsule endoscopy has been established as a fundamental device in the practicing gastroenterologist’s toolbox. Its utilization in diagnostic algorithms for suspected small bowel bleeding, Crohn’s disease, and small bowel tumors has been approved by several guidelines. The advent of double-balloon enteroscopy has significantly increased the therapeutic possibilities, and multiple devices (single-balloon enteroscopy and spiral enteroscopy) have since been released with the aim of improving the performance of small bowel enteroscopy. Recently, some important innovations have appeared on the small bowel endoscopy scene, providing further improvement to its evolution. Artificial intelligence in capsule endoscopy should increase diagnostic accuracy and reading efficiency, and the introduction of motorized spiral enteroscopy into clinical practice could also improve the therapeutic yield. This review focuses on the most recent studies on artificial-intelligence-assisted capsule endoscopy and motorized spiral enteroscopy.
Affiliation(s)
- Luigina Vanessa Alemanni
- Gastroenterology Unit, Santa Maria delle Croci Hospital, Ravenna, Italy; Department of Medical and Surgical Sciences, S. Orsola-Malpighi Hospital, Bologna, Italy
- Stefano Fabbri
- Gastroenterology Unit, Santa Maria delle Croci Hospital, Ravenna, Italy; Department of Medical and Surgical Sciences, S. Orsola-Malpighi Hospital, Bologna, Italy
12
Ionescu A, Glodeanu A, Ionescu M, Zaharie S, Ciurea A, Golli A, Mavritsakis N, Popa D, Vere C. Clinical impact of wireless capsule endoscopy for small bowel investigation (Review). Exp Ther Med 2022; 23:262. [PMID: 35251328 PMCID: PMC8892621 DOI: 10.3892/etm.2022.11188]
Abstract
Wireless capsule endoscopy is currently considered the gold standard in the investigation of the small bowel. It is both practical for physicians and easily accepted by patients. Prior to its development, two types of imaging investigations of the small bowel were available: radiologic and endoscopic. Radiologic techniques are less invasive and more comfortable for patients and depict the small bowel in its entirety, but they may involve radiation exposure; their images are constructed from signals emitted by various equipment and require specialized interpretation. Endoscopic techniques provide real-time color images acquired by miniature cameras from inside the small bowel, require interpretation only from a medical point of view, and may allow biopsies to be performed, but the investigation covers only part of the small bowel and is more difficult for patients to accept. Wireless capsule endoscopy is the current solution that overcomes some of these drawbacks: it covers the entire small bowel, provides real-time images acquired by cameras, is painless for patients, and represents an abundant source of information for physicians. Yet it lacks motion control and the ability to perform biopsies or administer drugs. However, significant effort has been directed at these limitations by technical and medical teams, and more advanced capsules will surely become available in the coming years.
Affiliation(s)
- Alin Ionescu
- Department of Medical History, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Adina Glodeanu
- Department of Internal Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mihaela Ionescu
- Department of Medical Informatics and Biostatistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Sorin Zaharie
- Department of Nephrology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Ana Ciurea
- Department of Oncology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Andreea Golli
- Department of Public Health Management, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Nikolaos Mavritsakis
- Department of Physical Education and Sport, ‘1 Decembrie 1918’ University, 510009 Alba Iulia, Romania
- Didi Popa
- Department of Information and Communication Technology, University of Craiova, 200585 Craiova, Romania
- Cristin Vere
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
13
Chen Z, Lin L, Wu C, Li C, Xu R, Sun Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun (Lond) 2021; 41:1100-1115. [PMID: 34613667 PMCID: PMC8626610 DOI: 10.1002/cac2.12215]
Abstract
Over the past decade, artificial intelligence (AI) has contributed substantially to the resolution of various medical problems, including cancer. Deep learning (DL), a subfield of AI, is characterized by its ability to perform automated feature extraction and has great power in the assimilation and evaluation of large amounts of complicated data. On the basis of a large quantity of medical data and novel computational technologies, AI, especially DL, has been applied in various aspects of oncology research and has the potential to enhance cancer diagnosis and treatment. These applications include early cancer detection, diagnosis, classification and grading, molecular characterization of tumors, prediction of patient outcomes and treatment responses, personalized treatment, automated radiotherapy workflows, novel anti-cancer drug discovery, and clinical trials. In this review, we introduce the general principles of AI, summarize the major areas of its application in cancer diagnosis and treatment, and discuss its future directions and remaining challenges. As the adoption of AI in clinical use increases, we anticipate the arrival of AI-powered cancer care.
Affiliation(s)
- Zi‐Hang Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Zhongshan School of Medicine, Sun Yat‐sen University, Guangzhou, Guangdong 510080, P. R. China
- Li Lin
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chen‐Fei Wu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chao‐Feng Li
- Artificial Intelligence Laboratory, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Rui‐Hua Xu
- Department of Medical Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Ying Sun
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
14
Xiao YF, Wu ZX, He S, Zhou YY, Zhao YB, He JL, Peng X, Yang ZX, Lv QJ, Yang H, Bai JY, Fan CQ, Tang B, Hu CJ, Jie MM, Liu E, Lin H, Koulaouzidis A, Zhao XY, Yang SM, Xie X. Fully automated magnetically controlled capsule endoscopy for examination of the stomach and small bowel: a prospective, feasibility, two-centre study. Lancet Gastroenterol Hepatol 2021; 6:914-921. [PMID: 34555347 DOI: 10.1016/s2468-1253(21)00274-0]
Abstract
BACKGROUND The use of magnetically controlled capsules for gastroscopy is in the early stages of clinical adoption. We aimed to evaluate the safety and efficacy of a fully automated magnetically controlled capsule endoscopy (FAMCE) system in clinical practice for gastroscopy and small bowel examination. METHODS We did a prospective, comparative study to evaluate the safety and efficacy of FAMCE. Patients from two hospitals in Chongqing, China were consecutively enrolled. Eligible participants were aged 18-80 years with suspected gastric pathology and no previous surgery. Participants underwent FAMCE for screening of gastric lesions, then conventional transoral gastroscopy 2 h later, and stomach examination results were compared. The primary outcome was the rate of complete detection of gastric anatomy landmarks (cardia, fundus, body, angulus, antrum, and pylorus) by FAMCE. Secondary outcomes were the time required for gastric completion by FAMCE, the rate of detection of gastric lesions by FAMCE compared with conventional transoral gastroscopy, and the rate of complete small bowel examination. Adverse events were also evaluated. The study was registered in the Chinese Clinical Trial Registry, ChiCTR2000040507. FINDINGS Between May 12 and Aug 17, 2020, 114 patients (mean age 44·0 years [IQR 34·0-55·0]; 63 [55%] female) were enrolled. The rate of complete detection of gastric anatomical structures by FAMCE was 100% (95% CI 99·3-100·0). The concordance between FAMCE and conventional transoral gastroscopy was 99·61% (99·45-99·78). The mean completion time of a gastroscopy with FAMCE was 19·17 min (SD 1·43; median 19·00, IQR 19·00-20·00), compared with 5·21 min (2·00; 5·18, 3·68-6·45) for conventional transoral gastroscopy. In 114 enrolled patients, 214 lesions were detected by FAMCE and conventional transoral gastroscopy. Of those, 193 were detected by both modalities. 
FAMCE missed five pathologies (four cases of gastritis and one polyp), whereas conventional transoral gastroscopy missed 16 pathologies (12 cases of gastritis, one polyp, one fundal xanthoma, and two antral erosions). FAMCE was able to provide a complete small bowel examination for all 114 patients and detected intestinal lesions in 50 (44%) patients. During the study, two (2%) patients experienced adverse events. No serious adverse events were recorded, and there was no evidence of capsule retention. INTERPRETATION The performance of FAMCE is similar to that of conventional transoral gastroscopy in completion of gastric examination and lesion detection. Furthermore, it can provide a complete small bowel examination. Therefore, FAMCE could be an effective method for examination of the gastrointestinal tract. FUNDING Chinese National Key Research and Development Program.
Affiliation(s)
- Yu-Feng Xiao
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Zhi-Xuan Wu
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Song He
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yuan-Yuan Zhou
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yong-Bing Zhao
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Jia-Lin He
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xue Peng
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Zhao-Xia Yang
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Qing-Jian Lv
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Huan Yang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Jian-Ying Bai
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Chao-Qiang Fan
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Bo Tang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Chang-Jiang Hu
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Meng-Meng Jie
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- En Liu
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Hui Lin
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xiao-Yan Zhao
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Shi-Ming Yang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xia Xie
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
15
Oka A, Ishimura N, Ishihara S. A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology. Diagnostics (Basel) 2021; 11:1719. [PMID: 34574060 PMCID: PMC8468082 DOI: 10.3390/diagnostics11091719]
Abstract
Artificial intelligence (AI) is rapidly becoming an essential tool in the medical field as well as in daily life. Recent developments in deep learning, a subfield of AI, have brought remarkable advances in image recognition, which facilitates improvement in the early detection of cancer by endoscopy, ultrasonography, and computed tomography. In addition, AI-assisted big data analysis represents a great step forward for precision medicine. This review provides an overview of AI technology, particularly for gastroenterology, hepatology, and pancreatology, to help clinicians utilize AI in the near future.
Affiliation(s)
- Akihiko Oka
- Department of Internal Medicine II, Faculty of Medicine, Shimane University, Izumo 693-8501, Shimane, Japan
16
Convolution neural network for the diagnosis of wireless capsule endoscopy: a systematic review and meta-analysis. Surg Endosc 2021; 36:16-31. [PMID: 34426876 PMCID: PMC8741689 DOI: 10.1007/s00464-021-08689-3]
Abstract
Background Wireless capsule endoscopy (WCE) is considered a powerful instrument for the diagnosis of intestinal diseases. Convolutional neural networks (CNNs) are a type of artificial intelligence with the potential to assist the detection of lesions in WCE images. We aimed to perform a systematic review of current research progress on CNN applications in WCE. Methods A search in PubMed, SinoMed, and Web of Science was conducted to collect all original publications about CNN implementation in WCE. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) checklist. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I2 was used for the evaluation of heterogeneity. Results 16 articles comprising 23 independent studies were included. CNN applications in WCE were divided into detection of erosion/ulcer, gastrointestinal bleeding (GI bleeding), and polyps/cancer. The pooled sensitivity of CNN was 0.96 (95% CI 0.91–0.98) for erosion/ulcer, 0.97 (95% CI 0.93–0.99) for GI bleeding, and 0.97 (95% CI 0.82–0.99) for polyps/cancer. The corresponding specificity was 0.97 (95% CI 0.93–0.99) for erosion/ulcer, 1.00 (95% CI 0.99–1.00) for GI bleeding, and 0.98 (95% CI 0.92–0.99) for polyps/cancer. Conclusion Based on our meta-analysis, CNN-based diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer achieved high-level performance, with high sensitivity and specificity. Therefore, CNNs have the potential to become an important assistant for WCE diagnosis. Supplementary Information The online version contains supplementary material available at 10.1007/s00464-021-08689-3.
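For orientation, each per-study sensitivity and specificity pooled above reduces to simple confusion-matrix arithmetic. The following is a minimal Python sketch of that arithmetic using hypothetical counts (chosen to mirror the erosion/ulcer figures); a single-proportion Wilson score interval stands in for the exact binomial, bivariate mixed-effects model the authors actually fitted, so the interval shown is illustrative only.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (illustrative stand-in
    for the exact binomial intervals reported in diagnostic meta-analyses)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sensitivity(tp, fn):
    # Proportion of frames with true lesions that the CNN flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of lesion-free frames that the CNN correctly clears.
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for one study.
tp, fn, tn, fp = 96, 4, 97, 3
sens = sensitivity(tp, fn)            # 0.96
spec = specificity(tn, fp)            # 0.97
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```

In a real meta-analysis these per-study pairs are combined jointly (sensitivity and specificity are correlated across studies), which is why a bivariate mixed-effects model is used rather than pooling each proportion separately as above.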