1.
Xie W, Hu J, Liang P, Mei Q, Wang A, Liu Q, Liu X, Wu J, Yang X, Zhu N, Bai B, Mei Y, Liang Z, Han W, Cheng M. Deep learning-based lesion detection and severity grading of small-bowel Crohn's disease ulcers on double-balloon endoscopy images. Gastrointest Endosc 2024; 99:767-777.e5. [PMID: 38065509] [DOI: 10.1016/j.gie.2023.11.059]
Abstract
BACKGROUND AND AIMS Double-balloon endoscopy (DBE) is widely used in diagnosing small-bowel Crohn's disease (CD). However, misdiagnosis frequently occurs when inexperienced endoscopists fail to detect lesions accurately, and CD evaluation may also be inaccurate owing to the subjectivity of endoscopists. This study aimed to use artificial intelligence (AI) to accurately detect and objectively assess small-bowel CD for more refined disease management. METHODS We collected 28,155 small-bowel DBE images from 628 patients between January 2018 and December 2022. Four expert gastroenterologists labeled the images, and at least 2 endoscopists made the final decision by agreement. A state-of-the-art deep learning model, EfficientNet-b5, was trained to detect CD lesions and grade CD ulcers. Detection covered ulcers, noninflammatory stenosis, and inflammatory stenosis; ulcer grading covered ulcerated surface, ulcer size, and ulcer depth. Model performance was compared with that of endoscopists. RESULTS The EfficientNet-b5 achieved high accuracies of 96.3% (95% confidence interval [CI], 95.7%-96.7%), 95.7% (95% CI, 95.1%-96.2%), and 96.7% (95% CI, 96.2%-97.2%) for the detection of ulcers, noninflammatory stenosis, and inflammatory stenosis, respectively. In ulcer grading, the EfficientNet-b5 exhibited average accuracies of 87.3% (95% CI, 84.6%-89.6%) for grading the ulcerated surface, 87.8% (95% CI, 85.0%-90.2%) for grading the size of ulcers, and 85.2% (95% CI, 83.2%-87.0%) for ulcer depth assessment. CONCLUSIONS The EfficientNet-b5 achieved high accuracy in detecting CD lesions and grading CD ulcers. The AI model can provide expert-level accuracy and an objective evaluation of small-bowel CD to optimize clinical treatment plans.
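The accuracies above are quoted with binomial 95% confidence intervals. The abstract does not state which interval construction was used; as a sketch, a Wilson score interval over illustrative counts (963 of 1000 correct, not the paper's raw data) yields an interval of comparable width:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (e.g. accuracy)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative counts (NOT from the paper): 963 of 1000 test images correct.
lo, hi = wilson_ci(963, 1000)
print(f"accuracy 96.3%, 95% CI {lo:.1%}-{hi:.1%}")
```

For the sample sizes involved here the Wilson and normal-approximation intervals are nearly identical; the Wilson form behaves better near 0% or 100% accuracy.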
Affiliation(s)
- Wanqing Xie
  - Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
  - Beth Israel Deaconess Medical Center, Harvard Medical School, Harvard University, Boston, Massachusetts, USA
- Jing Hu
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Pengcheng Liang
  - Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Qiao Mei
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Aodi Wang
  - Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Qiuyuan Liu
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiaofeng Liu
  - Gordon Center for Medical Imaging, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts, USA
- Juan Wu
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiaodong Yang
  - Department of General Surgery, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Nannan Zhu
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Bingqing Bai
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yiqing Mei
  - Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhen Liang
  - Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Wei Han
  - Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Mingmei Cheng
  - Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
2.
Sridhar GR, Siva Prasad AV, Lakshmi G. Scope and caveats: Artificial intelligence in gastroenterology. Artif Intell Gastroenterol 2024; 5:91607. [DOI: 10.35712/aig.v5.i1.91607]
Abstract
The use of artificial intelligence (AI) has evolved from its mid-20th-century origins into a pivotal tool in modern medicine. It leverages digital data and computational hardware for diverse applications, including diagnosis, prognosis, and treatment response in gastrointestinal and hepatic conditions. AI has had an impact on diagnostic techniques, particularly endoscopy, ultrasound, and histopathology. AI encompasses machine learning, natural language processing, and robotics, with machine learning being central. Machine learning involves sophisticated algorithms capable of managing complex datasets, far surpassing traditional statistical methods; these algorithms, both supervised and unsupervised, are integral to interpreting large datasets. In liver diseases, AI's non-invasive diagnostic applications, particularly in non-alcoholic fatty liver disease, and its role in characterizing hepatic lesions are promising. AI aids in distinguishing between normal and cirrhotic livers and improves the accuracy of lesion characterization and prognostication of hepatocellular carcinoma. AI enhances lesion identification during endoscopy, showing potential in the diagnosis and management of early-stage esophageal carcinoma. In peptic ulcer disease, AI technologies influence patient management strategies. AI is useful in colonoscopy, particularly in detecting smaller colonic polyps, although its applicability in non-academic settings requires further validation. Addressing these issues is vital for harnessing the potential of AI. In conclusion, while AI offers transformative possibilities in gastroenterology, careful integration that balances technical possibilities with ethical and practical considerations is essential for optimal use.
Affiliation(s)
- Atmakuri V Siva Prasad
  - Department of Gastroenterology, Institute of Gastroenterology, Visakhapatnam 530003, India
- Gumpeny Lakshmi
  - Department of Internal Medicine, Gayatri Vidya Parishad Institute of Healthcare & Medical Technology, Visakhapatnam 530048, India
3.
Yokote A, Umeno J, Kawasaki K, Fujioka S, Fuyuno Y, Matsuno Y, Yoshida Y, Imazu N, Miyazono S, Moriyama T, Kitazono T, Torisu T. Small bowel capsule endoscopy examination and open access database with artificial intelligence: The SEE-artificial intelligence project. DEN OPEN 2024; 4:e258. [PMID: 37359150] [PMCID: PMC10288072] [DOI: 10.1002/deo2.258]
Abstract
OBJECTIVES Artificial intelligence (AI) may be practical for image classification in small bowel capsule endoscopy (CE). However, creating a functional AI model is challenging. We attempted to create a dataset and an object detection CE AI model, exploring modeling problems along the way, to assist in reading small bowel CE. METHODS We extracted 18,481 images from 523 small bowel CE procedures performed at Kyushu University Hospital from September 2014 to June 2021. We annotated 12,320 images with 23,033 disease lesions, combined them with 6161 normal images as the dataset, and examined its characteristics. Based on the dataset, we created an object detection AI model using YOLOv5 and performed test validation. RESULTS The dataset carried 12 types of annotations, and multiple annotation types were observed in the same image. We validated the AI model on a test set of 1396 images; sensitivity across all 12 annotation types was about 91%, with 1375 true positives, 659 false positives, and 120 false negatives. The highest sensitivity for an individual annotation type was 97%, and the highest area under the receiver operating characteristic curve was 0.98, but detection quality varied by annotation type. CONCLUSIONS An object detection AI model for small bowel CE using YOLOv5 may provide effective and easy-to-understand reading assistance. In this SEE-AI project, we open our dataset, the weights of the AI model, and a demonstration to experience our AI. We look forward to further improving the AI model in the future.
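As a rough cross-check of the counts quoted in the abstract (1375 true positives, 659 false positives, 120 false negatives), sensitivity and precision follow directly; this is an illustrative sketch, not the authors' evaluation code, and their per-annotation aggregation may differ slightly from the pooled figures below:

```python
def sensitivity(tp, fn):
    """Recall: share of true lesions that the detector found."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Share of detections that were real lesions."""
    return tp / (tp + fp)

# Counts reported in the abstract: 1375 TP, 659 FP, 120 FN.
sens = sensitivity(1375, 120)   # ≈ 0.92
prec = precision(1375, 659)     # ≈ 0.68
print(f"sensitivity {sens:.1%}, precision {prec:.1%}")
```

Note the asymmetry: sensitivity is high, but roughly one in three detections is a false positive, which is why the abstract frames the model as reading *assistance* rather than autonomous diagnosis.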
Affiliation(s)
- Akihito Yokote, Junji Umeno, Keisuke Kawasaki, Yuta Fuyuno, Yuichi Matsuno, Yuichiro Yoshida, Noriyuki Imazu, Satoshi Miyazono, Takanari Kitazono, Takehiro Torisu
  - Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Shin Fujioka
  - Department of Endoscopic Diagnostics and Therapeutics, Kyushu University Hospital, Fukuoka, Japan
- Tomohiko Moriyama
  - International Medical Department, Kyushu University Hospital, Fukuoka, Japan
4.
Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024; 65:133-140. [PMID: 38527297] [PMCID: PMC11060635] [DOI: 10.4103/singaporemedj.smj-2023-187]
Abstract
INTRODUCTION Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities. METHODS We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images that were then used in a dense feed-forward neural network classifier. We then combined the open-source Kvasir-Capsule dataset (n = 43) and locally collected CE data (n = 29). RESULTS Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM model based on a pre-trained ResNet50 architecture had an area under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM model, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding. CONCLUSION Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.
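The "averaged five-fold cross-validation" reporting above (e.g. AUROC 0.969±0.008) amounts to training on k−1 folds, scoring on the held-out fold, and summarizing the per-fold metric as mean±std. A minimal stdlib sketch, with hypothetical fold scores (the paper's individual fold results are not given in the abstract):

```python
import statistics

def kfold_indices(n, k):
    """Yield (train, val) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# Hypothetical per-fold AUROC scores, summarized as the paper summarizes its metrics.
fold_scores = [0.975, 0.962, 0.970, 0.968, 0.970]
mean, std = statistics.mean(fold_scores), statistics.pstdev(fold_scores)
print(f"AUROC {mean:.3f}±{std:.3f}")
```

In practice the split would be stratified by patient or video rather than by raw index, so that frames from one patient never appear in both train and validation folds.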
Affiliation(s)
- Bochao Jiang
  - Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan
  - Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong
  - Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong
  - Health Services and Systems Research, Duke-NUS Medical School, Singapore
  - Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam
  - Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang
  - Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
5.
Mota J, Almeida MJ, Mendes F, Martins M, Ribeiro T, Afonso J, Cardoso P, Cardoso H, Andrade P, Ferreira J, Mascarenhas M, Macedo G. From Data to Insights: How Is AI Revolutionizing Small-Bowel Endoscopy? Diagnostics (Basel) 2024; 14:291. [PMID: 38337807] [PMCID: PMC10855436] [DOI: 10.3390/diagnostics14030291]
Abstract
The role of capsule endoscopy and enteroscopy in managing various small-bowel pathologies is well established. However, their broader application has been hampered mainly by their lengthy reading times. As a result, there is growing interest in employing artificial intelligence (AI) in these diagnostic and therapeutic procedures, driven by the prospect of overcoming some major limitations and enhancing healthcare efficiency while maintaining high accuracy. Over the past two decades, the applicability of AI to gastroenterology has been increasing, mainly because of the field's strong imaging component. There are now a multitude of studies using AI, specifically convolutional neural networks, that demonstrate the potential applications of AI to these endoscopic techniques, achieving remarkable results. These findings suggest that there is ample opportunity for AI to expand its presence in the management of gastroenterological diseases and, in the future, catalyze a game-changing transformation in clinical activities. This review provides an overview of the current state of the art of AI in small-bowel study, with a particular focus on capsule endoscopy and enteroscopy.
Affiliation(s)
- Joana Mota, Maria João Almeida, Francisco Mendes, Miguel Martins, Tiago Ribeiro, João Afonso, Pedro Cardoso
  - Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Helder Cardoso, Patrícia Andrade, Guilherme Macedo
  - Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Ferreira
  - Department of Mechanical Engineering, Faculty of Engineering, University of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal
  - Digestive Artificial Intelligence Development, R. Alfredo Allen 455-461, 4200-135 Porto, Portugal
- Miguel Mascarenhas
  - Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - ManopH Gastroenterology Clinic, R. de Sá da Bandeira 752, 4000-432 Porto, Portugal
6.
Zhu Y, Lyu X, Tao X, Wu L, Yin A, Liao F, Hu S, Wang Y, Zhang M, Huang L, Wang J, Zhang C, Gong D, Jiang X, Zhao L, Yu H. A newly developed deep learning-based system for automatic detection and classification of small bowel lesions during double-balloon enteroscopy examination. BMC Gastroenterol 2024; 24:10. [PMID: 38166722] [PMCID: PMC10759410] [DOI: 10.1186/s12876-023-03067-w]
Abstract
BACKGROUND Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results owing to oversight or inexperience. We aimed to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for classifying lesions into four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients and then compared with that of 8 endoscopists; expert consensus served as the reference standard. RESULTS On the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. On the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the system was superior to novices and comparable to experts. CONCLUSIONS We established a real-time CAD system, ENDOANGEL-DBE, for detecting and classifying small bowel lesions in DBE with favourable performance. It has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
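The reference standard above is expert consensus. The abstract does not specify the exact consensus rule, so the following is only an illustrative scheme: accept a label when at least two annotators agree, otherwise flag the image for adjudication.

```python
from collections import Counter
from typing import List, Optional

def consensus_label(labels: List[str], min_agree: int = 2) -> Optional[str]:
    """Most common label among expert annotations; None if no label
    reaches the required level of agreement (needs adjudication)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agree else None

print(consensus_label(["ulcer", "ulcer", "erosion"]))  # agreement reached
print(consensus_label(["ulcer", "erosion", "polyp"]))  # no agreement
```

Whatever the exact rule, the key point is that the model and the 8 endoscopists were scored against the same fixed reference labels, making the 85% vs. 77 ± 6% comparison meaningful.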
Affiliation(s)
- Yijie Zhu, Xiaoguang Lyu, Xiao Tao, Lianlian Wu, Anning Yin, Fei Liao, Yang Wang, Mengjiao Zhang, Li Huang, Junxiao Wang, Chenxia Zhang, Dexin Gong, Xiaoda Jiang, Liang Zhao, Honggang Yu
  - Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
  - Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
  - Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Shan Hu
  - School of Computer Science, Wuhan University, Wuhan, China
7.
Ukashi O, Soffer S, Klang E, Eliakim R, Ben-Horin S, Kopylov U. Capsule Endoscopy in Inflammatory Bowel Disease: Panenteric Capsule Endoscopy and Application of Artificial Intelligence. Gut Liver 2023; 17:516-528. [PMID: 37305947] [PMCID: PMC10352070] [DOI: 10.5009/gnl220507]
Abstract
Video capsule endoscopy (VCE) of the small bowel has been proven to accurately diagnose small-bowel inflammation and to predict future clinical flares in patients with Crohn's disease (CD). In 2017, the panenteric capsule (PillCam Crohn's system) was introduced, enabling reliable evaluation of the entire small and large intestines. The ability to visualize both parts of the gastrointestinal tract in a single, feasible procedure holds significant promise for patients with CD, enabling determination of disease extent and severity and potentially optimizing disease management. In recent years, machine learning applications for VCE have been well studied, demonstrating impressive performance and high accuracy for the detection of various gastrointestinal pathologies, among them inflammatory bowel disease lesions. Artificial neural network models have been proven to accurately detect, classify, and grade CD lesions and to shorten VCE reading time, making the process less tedious, with the potential to minimize missed diagnoses and better predict clinical outcomes. Nevertheless, prospective and real-world studies are essential to precisely examine artificial intelligence applications in real-life inflammatory bowel disease practice.
Affiliation(s)
- Offir Ukashi
  - Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
  - Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
  - Department of Internal Medicine A, Sheba Medical Center, Tel Hashomer, Israel
- Shelly Soffer
  - Deep Vision Lab, Sheba Medical Center, Tel Hashomer, Israel
  - Internal Medicine B, Assuta Medical Center, Ashdod, Israel
  - Faculty of Health Sciences, Ben Gurion University of the Negev, Beer Sheva, Israel
- Eyal Klang
  - Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
  - Deep Vision Lab, Sheba Medical Center, Tel Hashomer, Israel
  - Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel
- Rami Eliakim
  - Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
  - Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Shomron Ben-Horin
  - Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
  - Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Uri Kopylov
  - Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
  - Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
8.
Chung J, Oh DJ, Park J, Kim SH, Lim YJ. Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model. Diagnostics (Basel) 2023; 13:1389. [PMID: 37189489] [DOI: 10.3390/diagnostics13081389]
Abstract
The first step in reading a capsule endoscopy (CE) is determining the gastrointestinal (GI) organ. Because CE produces too many inappropriate and repetitive images, automatic organ classification cannot be directly applied to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model relative to the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the AI score's cut-off improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time, and setting the cut-off of the AI score to 99.9% resulted in a better intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be more easily located by adjusting the cut-off of the AI score and visualization of its result over time.
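The transitional-area idea above — drop low-confidence frames with a 99.9% AI-score cut-off, then look for where the predicted organ changes over time — can be sketched as follows. The per-frame stream and function names here are hypothetical; the study's actual pipeline is a no-code platform model.

```python
from typing import List, Tuple

def confident(frames, threshold=0.999):
    """Keep only frames whose top-1 AI score passes the cut-off
    (99.9% in the study), discarding ambiguous frames."""
    return [organ for organ, score in frames if score >= threshold]

def transitions(organ_preds: List[str]) -> List[Tuple[int, str, str]]:
    """Indices where the predicted organ changes: candidate transitional areas."""
    return [(i, organ_preds[i - 1], organ_preds[i])
            for i in range(1, len(organ_preds))
            if organ_preds[i] != organ_preds[i - 1]]

# Hypothetical per-frame (organ, top-1 score) stream from a CE video.
frames = [("esophagus", 0.9999), ("stomach", 0.98), ("stomach", 0.9995),
          ("small bowel", 0.9998), ("colon", 0.9991)]
preds = confident(frames)  # the 0.98-score frame is dropped
print(transitions(preds))
```

Raising the cut-off trades coverage for cleaner boundaries: fewer frames survive, but the surviving sequence flips organ labels less noisily, which is the intuition behind the study's visualization over time.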
Affiliation(s)
- Joowon Chung: Department of Internal Medicine, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Dong Jun Oh: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Junseok Park: Department of Internal Medicine, Digestive Disease Center, Institute for Digestive Research, Soonchunhyang University College of Medicine, Seoul 04401, Republic of Korea
- Su Hwan Kim: Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul 07061, Republic of Korea
- Yun Jeong Lim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
9
Chu Y, Huang F, Gao M, Zou DW, Zhong J, Wu W, Wang Q, Shen XN, Gong TT, Li YY, Wang LF. Convolutional neural network-based segmentation network applied to image recognition of angiodysplasias lesion under capsule endoscopy. World J Gastroenterol 2023; 29:879-889. [PMID: 36816625 PMCID: PMC9932427 DOI: 10.3748/wjg.v29.i5.879] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 11/26/2022] [Accepted: 01/12/2023] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manual reading of the entire gastrointestinal tract is time-consuming and requires a heavy workload, which affects the accuracy of diagnosis.
AIM To evaluate whether artificial intelligence can assist the diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time.
METHODS We proposed a convolutional neural network semantic segmentation model with a feature fusion method that automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, thereby improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network to design the fusion mechanism, fusing shallow and deep features and classifying images at the pixel level to achieve segmentation and recognition of vascular dysplasia. A training set and a test set were constructed, and the model was compared with PSPNet, Deeplab3+, and UperNet.
RESULTS On the test set constructed in the study, the model achieved satisfactory results: pixel accuracy of 99%, mean intersection over union of 0.69, negative predictive value of 98.74%, and positive predictive value of 94.27%. The model had 46.38 M parameters, required 467.2 G floating-point operations, and took 0.6 s to segment and recognize an image.
CONCLUSION Constructing a deep learning-based segmentation network to segment and recognize angiodysplasia lesions is an effective and feasible method for diagnosing angiodysplasia.
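The pixel-level metrics reported in this abstract (pixel accuracy, intersection over union, NPV, PPV) all derive from per-pixel confusion counts. A minimal sketch with invented counts; only the formulas correspond to the metrics named above.

```python
# Illustrative computation of pixel-level segmentation metrics from per-pixel
# confusion counts. The counts are invented; only the formulas mirror the
# metrics reported in the abstract.

def seg_metrics(tp, fp, tn, fn):
    pixel_acc = (tp + tn) / (tp + fp + tn + fn)  # fraction of pixels labeled correctly
    iou = tp / (tp + fp + fn)                    # intersection over union, lesion class
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    return pixel_acc, iou, ppv, npv

acc, iou, ppv, npv = seg_metrics(tp=690, fp=42, tn=9200, fn=68)
print(f"acc={acc:.3f} IoU={iou:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
```

The study's mean IoU averages this per-class IoU over all classes; the sketch shows only the single-class case.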
Affiliation(s)
- Ye Chu: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Fang Huang: Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Min Gao: Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Duo-Wu Zou: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Jie Zhong: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Wei Wu: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Qi Wang: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Xiao-Nan Shen: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Ting-Ting Gong: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Yuan-Yi Li: Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
- Li-Fu Wang: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
10
Ding Z, Shi H, Zhang H, Zhang H, Tian S, Zhang K, Cai S, Ming F, Xie X, Liu J, Lin R. Artificial intelligence-based diagnosis of abnormalities in small-bowel capsule endoscopy. Endoscopy 2023; 55:44-51. [PMID: 35931065 DOI: 10.1055/a-1881-4209] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
BACKGROUND: Further development of deep learning-based artificial intelligence (AI) technology to automatically diagnose multiple abnormalities in small-bowel capsule endoscopy (SBCE) videos is necessary. We aimed to develop an AI model, to compare its diagnostic performance with doctors of different experience levels, and to further evaluate its auxiliary role for doctors in diagnosing multiple abnormalities in SBCE videos. METHODS: The AI model was trained using 280,426 images from 2565 patients, and the diagnostic performance was validated in 240 videos. RESULTS: The sensitivity of the AI model for red spots, inflammation, blood content, vascular lesions, protruding lesions, parasites, diverticulum, and normal variants was 97.8%, 96.1%, 96.1%, 94.7%, 95.6%, 100%, 100%, and 96.4%, respectively. The specificity was 86.0%, 75.3%, 87.3%, 77.8%, 67.7%, 97.5%, 91.2%, and 81.3%, respectively. The accuracy was 95.0%, 88.8%, 89.2%, 79.2%, 80.8%, 97.5%, 91.3%, and 93.3%, respectively. For junior doctors, the assistance of the AI model increased the overall accuracy from 85.5% to 97.9% (P < 0.001, Bonferroni corrected), comparable to that of experts (96.6%, P > 0.0125, Bonferroni corrected). CONCLUSIONS: This well-trained AI diagnostic model automatically diagnosed multiple small-bowel abnormalities simultaneously based on video-level recognition, with potential as an excellent auxiliary system for less-experienced endoscopists.
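Video-level recognition, as opposed to per-frame classification, requires aggregating frame predictions into one decision per video. The abstract does not specify the authors' aggregation rule; the consecutive-run rule below is only one plausible, invented example of the idea.

```python
# Sketch of one plausible frame-to-video aggregation rule (the abstract does
# not state the authors' method): call the video positive for an abnormality
# when at least k consecutive frames are predicted positive.

def video_positive(frame_flags, k=3):
    """frame_flags: iterable of 0/1 per-frame predictions for one abnormality."""
    run = 0
    for flag in frame_flags:
        run = run + 1 if flag else 0  # count the current run of positives
        if run >= k:
            return True
    return False

print(video_positive([0, 1, 1, 0, 1, 1, 1, 0]))  # → True (a run of 3 positives)
```

Requiring a run of consecutive positives, rather than any single positive frame, is one common way to trade a little sensitivity for robustness against isolated false-positive frames.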
Affiliation(s)
- Zhen Ding: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Huiying Shi: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hang Zhang: Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
- Hao Zhang: Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
- Shuxin Tian: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Department of Gastroenterology, the First Affiliated Hospital of Shihezi University School of Medicine, Shihezi 832008, China
- Kun Zhang: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Sicheng Cai: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Fanhua Ming: Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
- Xiaoping Xie: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jun Liu: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Rong Lin: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
11
Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592 PMCID: PMC9672666 DOI: 10.3389/fmed.2022.1018937] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/03/2022] [Indexed: 11/06/2022] Open
Abstract
Background Artificial intelligence (AI) holds considerable promise for diagnostics in the field of gastroenterology. This systematic review and meta-analysis aims to assess the diagnostic accuracy of AI models compared with the gold standard of experts and histopathology for the diagnosis of various gastrointestinal (GI) luminal pathologies, including polyps, neoplasms, and inflammatory bowel disease. Methods We searched the PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and constructed hierarchical summary receiver operating characteristic (HSROC) curves. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted based on the type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0-94.1) and specificity was 91.7% (95% CI: 87.4-94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) were at low risk on the reference standard domain and 67% (n = 49) were at low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies. There is a need for large, multi-center trials in both high-income and low- and middle-income countries to assess the performance of these AI models in real clinical settings and their impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
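The pooled sensitivity and specificity above come from a bivariate mixed-effects (HSROC) model. The basic idea of pooling on the logit scale can be conveyed with a much simpler fixed-effect, inverse-variance sketch; the study counts below are invented and this is not the review's actual model.

```python
import math

# Simplified fixed-effect, inverse-variance pooling of study sensitivities on
# the logit scale (the review itself used a bivariate mixed-effects model).
# Study counts are invented for illustration.

def pooled_sensitivity(studies):
    """studies: (true_positive, false_negative) counts per study."""
    num = den = 0.0
    for tp, fn in studies:
        p = tp / (tp + fn)                 # study-level sensitivity
        logit = math.log(p / (1 - p))
        var = 1 / tp + 1 / fn              # approximate variance of the logit
        num += logit / var                 # inverse-variance weighting
        den += 1 / var
    return 1 / (1 + math.exp(-num / den))  # back-transform to a proportion

print(round(pooled_sensitivity([(90, 10), (45, 5), (180, 20)]), 3))  # → 0.9
```

Pooling on the logit scale keeps the estimate inside (0, 1) and gives larger studies (smaller logit variance) proportionally more weight.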
Affiliation(s)
- Om Parkash: Department of Medicine, Aga Khan University, Karachi, Pakistan
- Uswa Jiwani: Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Fahad Rind: Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
- Zahra Ali Padhani: Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Arjumand Rizvi: Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy: Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jai K. Das: Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan; Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Correspondence: Jai K. Das
12
Deep Learning Multi-Domain Model Provides Accurate Detection and Grading of Mucosal Ulcers in Different Capsule Endoscopy Types. Diagnostics (Basel) 2022; 12:2490. [PMID: 36292178 PMCID: PMC9600959 DOI: 10.3390/diagnostics12102490] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 10/06/2022] [Accepted: 10/08/2022] [Indexed: 11/17/2022] Open
Abstract
Background and Aims: The aim of our study was to create an accurate patient-level combined algorithm for the identification of ulcers on CE images from two different capsules. Methods: We retrospectively collected CE images from the PillCam-SB3 capsule and the PillCam-Crohn's capsule. ML algorithms were trained to classify small bowel CE images as either normal or ulcerated mucosa: a separate model for each capsule type, a cross-domain model (training the model on one capsule type and testing on the other), and a combined model. Results: The dataset included 33,100 CE images: 20,621 PillCam-SB3 images and 12,479 PillCam-Crohn's images, of which 3582 were colonic images. There were 15,684 normal mucosa images and 17,416 ulcerated mucosa images. While the separate models for each capsule type achieved excellent accuracy (average AUC 0.95 and 0.98, respectively), the cross-domain model achieved a wide range of accuracies (0.569-0.88) with an AUC of 0.93. The combined model achieved the best results, with an average AUC of 0.99 and an average mean patient accuracy of 0.974. Conclusions: A combined model for two different capsules provided high and consistent diagnostic accuracy. Creating a holistic AI model for automated capsule reading is an essential part of the refinement required to adapt ML models to clinical practice.
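The per-capsule, cross-domain, and combined models above are compared by AUC. A compact way to compute AUC from image-level scores is the rank (Mann-Whitney) formulation; the scores and labels below are invented, not data from the study.

```python
# Rank-based (Mann-Whitney) AUC from classifier scores. Invented toy data:
# label 1 = ulcerated mucosa, label 0 = normal mucosa.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # each positive/negative pair contributes 1 for a win, 0.5 for a tie
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.4, 0.8, 0.35, 0.3], [1, 0, 1, 1, 0]))  # ≈ 0.833
```

AUC equals the probability that a randomly chosen ulcerated image is scored higher than a randomly chosen normal one, which is why it is threshold-independent and well suited to comparing models across capsule types.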
13
Chetcuti Zammit S, Sidhu R. Artificial intelligence within the small bowel: are we lagging behind? Curr Opin Gastroenterol 2022; 38:307-317. [PMID: 35645023 DOI: 10.1097/mog.0000000000000827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
PURPOSE OF REVIEW The use of artificial intelligence in small bowel capsule endoscopy is expanding. This review focuses on the use of artificial intelligence for small bowel pathology compared with human data and on developments to date. RECENT FINDINGS The diagnosis and management of small bowel disease has been revolutionized by the advent of capsule endoscopy. Reading capsule endoscopy videos, however, is time-consuming, with an average reading time of 40 min. Furthermore, the fatigued human eye may miss subtle lesions, including indiscreet mucosal bulges. In recent years, artificial intelligence has made significant progress in the field of medicine, including gastroenterology. Machine learning has enabled feature extraction, and in combination with deep neural networks, image classification has now materialized for routine endoscopy. SUMMARY Artificial intelligence is built into the Navicam-Ankon capsule endoscopy reading system. This development will no doubt expand to other capsule endoscopy platforms and to capsule endoscopes used to visualize other parts of the gastrointestinal tract as standard. This wireless and patient-friendly technique, combined with rapid reading platforms aided by artificial intelligence, will become an attractive and viable choice for how patients are investigated in the future.
Affiliation(s)
- Reena Sidhu: Academic Department of Gastroenterology, Royal Hallamshire Hospital; Academic Unit of Gastroenterology, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
14
Zhao PY, Han K, Yao RQ, Ren C, Du XH. Application Status and Prospects of Artificial Intelligence in Peptic Ulcers. Front Surg 2022; 9:894775. [PMID: 35784921 PMCID: PMC9244632 DOI: 10.3389/fsurg.2022.894775] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Accepted: 05/31/2022] [Indexed: 02/05/2023] Open
Abstract
Peptic ulcer (PU) is a common and frequently occurring disease. Although PU seriously threatens the lives and health of people worldwide, the application of artificial intelligence (AI) has strongly promoted diversification and modernization in the diagnosis and treatment of PU. This minireview elaborates on the research progress of AI in the field of PU, from PU's pathogenic factor Helicobacter pylori (Hp) infection, through diagnosis and differential diagnosis, to its management and complications (bleeding, obstruction, perforation, and malignant transformation). Finally, the challenges and prospects of AI application in PU are discussed. With a deepening understanding of modern medical technology, AI remains a promising option in the management of PU patients and will play an increasingly indispensable role. Realizing the robustness, versatility, and diversity of multifunctional AI systems for PU and conducting multicenter prospective clinical research as soon as possible are the top priorities for the future.
Affiliation(s)
- Peng-yue Zhao: Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ke Han: Department of Gastroenterology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ren-qi Yao: Translational Medicine Research Center, Medical Innovation Research Division and Fourth Medical Center of the Chinese PLA General Hospital, Beijing, China
- Chao Ren: Department of Pulmonary and Critical Care Medicine, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiao-hui Du: Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Correspondence: Xiao-hui Du, Chao Ren, Ren-qi Yao
15
Bang CS, Lee JJ, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J Med Internet Res 2021; 23:e33267. [PMID: 34904949 PMCID: PMC8715364 DOI: 10.2196/33267] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 10/10/2021] [Accepted: 10/13/2021] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Interpretation of capsule endoscopy images or movies is operator-dependent and time-consuming. As a result, computer-aided diagnosis (CAD) has been applied to enhance the efficacy and accuracy of the review process. Two previous meta-analyses reported the diagnostic performance of CAD models for gastrointestinal ulcers or hemorrhage in capsule endoscopy; however, the systematic reviews conducted so far are insufficient to determine the real diagnostic validity of CAD models. OBJECTIVE To evaluate the diagnostic test accuracy of CAD models for gastrointestinal ulcers or hemorrhage using wireless capsule endoscopic images. METHODS We searched core databases for studies based on CAD models for the diagnosis of ulcers or hemorrhage using capsule endoscopy that presented data on diagnostic performance. A systematic review and diagnostic test accuracy meta-analysis were performed. RESULTS Overall, 39 studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of ulcers (or erosions) were 0.97 (95% confidence interval, 0.95-0.98), 0.93 (0.89-0.95), 0.92 (0.89-0.94), and 138 (79-243), respectively. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of hemorrhage (or angioectasia) were 0.99 (0.98-0.99), 0.96 (0.94-0.97), 0.97 (0.95-0.99), and 888 (343-2303), respectively. Subgroup analyses showed robust results. Meta-regression found that publication year, number of training images, and target disease (ulcers vs erosions, hemorrhage vs angioectasia) were sources of heterogeneity. No publication bias was detected. CONCLUSIONS CAD models showed high performance for the optical diagnosis of gastrointestinal ulcers and hemorrhage in wireless capsule endoscopy.
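The diagnostic odds ratio (DOR) reported above is related to sensitivity and specificity by DOR = [sens/(1-sens)] × [spec/(1-spec)]. Note that the review pooled DORs across studies directly, so the value implied by the pooled point estimates need not match the reported pooled DOR, as this sketch shows.

```python
# DOR from sensitivity and specificity: the odds of a positive test in
# diseased patients divided by the odds of a positive test in healthy ones.

def diagnostic_odds_ratio(sens, spec):
    return (sens / (1 - sens)) * (spec / (1 - spec))

# pooled point estimates for ulcers/erosions from the abstract above
print(round(diagnostic_odds_ratio(0.93, 0.92), 1))  # → 152.8 (reported pooled DOR: 138)
```

The gap between 152.8 and the reported 138 is expected: pooled sensitivity, pooled specificity, and pooled DOR are each estimated separately across studies, so they need not be mutually consistent.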
Affiliation(s)
- Chang Seok Bang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
16
O’Hara F, McNamara D. Small-Bowel Capsule Endoscopy-Optimizing Capsule Endoscopy in Clinical Practice. Diagnostics (Basel) 2021; 11:2139. [PMID: 34829486 PMCID: PMC8623858 DOI: 10.3390/diagnostics11112139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/03/2021] [Accepted: 11/09/2021] [Indexed: 11/21/2022] Open
Abstract
The small bowel is the longest organ within the gastrointestinal tract. The emergence of small bowel capsule endoscopy (SBCE) over the last 20 years has revolutionized the investigation and diagnosis of small bowel pathology. Its utility as a non-invasive and well-tolerated procedure, which can be performed in an outpatient setting, has made it a valuable diagnostic tool. The indications for SBCE include obscure gastrointestinal bleeding, small bowel Crohn's disease and, less frequently, screening in polyposis syndromes, celiac disease, or other small bowel pathology. Currently, there are several small bowel capsules on the market from different manufacturers; however, they share many technological features. The European Society of Gastrointestinal Endoscopy (ESGE) has only recently developed a set of key quality indicators to guide quality standards in this area. Many of the technical aspects of capsule endoscopy still feature a degree of uncertainty in terms of optimal performance. Incomplete studies due to slow transit through the bowel, poor imaging secondary to poor preparation, and the risk of capsule retention remain frustrations in its clinical utility. Capsule review is a time-consuming process; however, artificial intelligence and machine learning offer opportunities to improve it. This narrative review examines our current standing on a number of these aspects and the potential to further the application of SBCE in order to maximize its diagnostic utility.
Affiliation(s)
- Fintan O'Hara: Department of Gastroenterology, Tallaght University Hospital, D24 NR0A Dublin, Ireland; TAGG Research Centre, School of Medicine, Trinity College, D24 NR0A Dublin, Ireland
- Deirdre McNamara: Department of Gastroenterology, Tallaght University Hospital, D24 NR0A Dublin, Ireland; TAGG Research Centre, School of Medicine, Trinity College, D24 NR0A Dublin, Ireland
17
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008 PMCID: PMC8567482 DOI: 10.3748/wjg.v27.i40.6794] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/15/2021] [Accepted: 09/16/2021] [Indexed: 02/06/2023] Open
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett's esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients after resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle, and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Affiliation(s)
- Paul T Kröner: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Megan ML Engels: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
- Benjamin S Glicksberg: The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Kipp W Johnson: The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Obaie Mzaik: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Jeanin E van Hooft: Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden 2300, The Netherlands
- Michael B Wallace: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
- Hashem B El-Serag: Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States; Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Chayakrit Krittanawong: Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States; Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
18
Artificial Intelligence in Capsule Endoscopy: A Practical Guide to Its Past and Future Challenges. Diagnostics (Basel) 2021; 11:1722. [PMID: 34574063 PMCID: PMC8469774 DOI: 10.3390/diagnostics11091722] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 12/20/2022] Open
Abstract
Artificial intelligence (AI) has revolutionized the medical diagnostic process for various diseases. Since the manual reading of capsule endoscopy videos is a time-intensive, error-prone process, computerized algorithms have been introduced to automate it. Over the past decade, the evolution of convolutional neural networks (CNNs) has enabled AI to detect multiple lesions simultaneously with increasing accuracy and sensitivity. Difficulty in validating CNN performance and the unique characteristics of capsule endoscopy images keep computer-aided reading systems in capsule endoscopy at a preclinical level. Although AI technology can be used as an auxiliary second observer in capsule endoscopy, it is expected that in the near future it will effectively reduce the reading time and ultimately become an independent, integrated reading system.
19
Convolution neural network for the diagnosis of wireless capsule endoscopy: a systematic review and meta-analysis. Surg Endosc 2021; 36:16-31. [PMID: 34426876 PMCID: PMC8741689 DOI: 10.1007/s00464-021-08689-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
Background Wireless capsule endoscopy (WCE) is considered to be a powerful instrument for the diagnosis of intestinal diseases. Convolutional neural networks (CNNs) are a type of artificial intelligence with the potential to assist the detection of WCE images. We aimed to perform a systematic review of the current research progress on CNN applications in WCE. Methods A search in PubMed, SinoMed, and Web of Science was conducted to collect all original publications about CNN implementation in WCE. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 risk list. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I² was used for the evaluation of heterogeneity. Results 16 articles with 23 independent studies were included. CNN applications in WCE were divided into detection of erosion/ulcer, gastrointestinal bleeding (GI bleeding), and polyps/cancer. The pooled sensitivity of CNN for erosion/ulcer was 0.96 (95% CI 0.91-0.98), for GI bleeding 0.97 (95% CI 0.93-0.99), and for polyps/cancer 0.97 (95% CI 0.82-0.99). The corresponding specificity of CNN for erosion/ulcer was 0.97 (95% CI 0.93-0.99), for GI bleeding 1.00 (95% CI 0.99-1.00), and for polyps/cancer 0.98 (95% CI 0.92-0.99). Conclusion Based on our meta-analysis, CNN-dependent diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer approached a high level of performance because of its high sensitivity and specificity. Therefore, in the future, CNN has the potential to become an important assistant for the diagnosis of WCE. Supplementary Information The online version contains supplementary material available at 10.1007/s00464-021-08689-3.
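The I² statistic used above to evaluate heterogeneity can be computed from Cochran's Q. A minimal sketch with invented effect sizes and weights; the review's actual computation sits inside its bivariate model.

```python
# Heterogeneity sketch: Cochran's Q from study effects and inverse-variance
# weights, then I² = max(0, (Q - df) / Q). Effects and weights are invented.

def i_squared(effects, weights):
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

print(round(i_squared([2.1, 2.9, 3.4, 1.8], [10, 8, 12, 9]), 2))  # → 0.82
```

I² expresses the share of total variability across studies attributable to true heterogeneity rather than chance, which is why values near 0 suggest consistent studies and values near 1 suggest real between-study differences.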
20
A Current and Newly Proposed Artificial Intelligence Algorithm for Reading Small Bowel Capsule Endoscopy. Diagnostics (Basel) 2021; 11:diagnostics11071183. [PMID: 34209948 PMCID: PMC8306692 DOI: 10.3390/diagnostics11071183] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 06/26/2021] [Accepted: 06/28/2021] [Indexed: 12/09/2022] Open
Abstract
Small bowel capsule endoscopy (SBCE) is one of the most useful methods for diagnosing small bowel mucosal lesions. However, interpreting the capsule images takes a long time. To solve this problem, artificial intelligence (AI) algorithms for SBCE reading are being actively studied. In this article, we analyzed several studies that applied AI algorithms to SBCE reading, including automatic lesion detection, automatic classification of bowel cleanliness, and automatic compartmentalization of the small bowel. Beyond automatic lesion detection, a new direction for AI algorithms, aimed at shorter reading times and improved lesion detection accuracy, should be considered. It is therefore necessary to develop an integrated AI algorithm composed of algorithms with various functions for use in clinical practice.
|
21
|
Hsiao YJ, Wen YC, Lai WY, Lin YY, Yang YP, Chien Y, Yarmishyn AA, Hwang DK, Lin TC, Chang YC, Lin TY, Chang KJ, Chiou SH, Jheng YC. Application of artificial intelligence-driven endoscopic screening and diagnosis of gastric cancer. World J Gastroenterol 2021; 27:2979-2993. [PMID: 34168402 PMCID: PMC8192292 DOI: 10.3748/wjg.v27.i22.2979] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 03/10/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
The landscape of gastrointestinal endoscopy continues to evolve as new technologies and techniques become available. The advent of image-enhanced and magnifying endoscopies has marked a step toward perfecting endoscopic screening and diagnosis of gastric lesions. Simultaneously, with the development of convolutional neural networks, artificial intelligence (AI) has made unprecedented breakthroughs in medical imaging, including the ongoing trials of computer-aided detection of colorectal polyps and gastrointestinal bleeding. In the past half-decade, applications of AI systems to gastric cancer have also emerged. With AI’s efficient computational power and learning capacity, endoscopists can improve their diagnostic accuracy and avoid missing or mischaracterizing gastric neoplastic changes. So far, several AI systems incorporating both traditional and novel endoscopy technologies have been developed for various purposes, with most achieving an accuracy of more than 80%. However, their feasibility, effectiveness, and safety in clinical practice remain to be seen, as no clinical trials have been completed yet. Nonetheless, AI-assisted endoscopies shed light on more accurate and sensitive ways of early detection, treatment guidance, and prognosis prediction for gastric lesions. This review summarizes the current status of various AI applications in gastric cancer and pinpoints directions for future research and clinical implementation from a clinical perspective.
Affiliation(s)
- Yu-Jer Hsiao
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
| | - Yuan-Chih Wen
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Medical Education, Taipei Veterans General Hospital, Taipei 112201, Taiwan
| | - Wei-Yi Lai
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
| | - Yi-Ying Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
| | - Yi-Ping Yang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Internal Medicine, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Critical Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
| | - Yueh Chien
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
| | | | - De-Kuang Hwang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
| | - Tai-Chi Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
| | - Yun-Chia Chang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
| | - Ting-Yi Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Department of Medicine, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
| | - Kao-Jung Chang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
| | - Shih-Hwa Chiou
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
| | - Ying-Chun Jheng
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Big Data Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
| |
|
22
|
3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00328-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Wireless capsule endoscopy (WCE) moves through the human body and captures video of the small bowel; because every frame of the video must be analyzed, diagnosing gastrointestinal infections is a tedious task for the physician. This tiresome assignment has fuelled researchers' efforts to develop automated techniques for detecting gastrointestinal infections. Segmentation of stomach infections is challenging because the lesion regions have low contrast and irregular shapes and sizes. To handle this task, this work proposes a new deep semantic segmentation model for 3D segmentation of different types of stomach infections. The segmentation model employs DeepLabv3 with a ResNet-50 backbone. The model is trained with ground-truth masks and performs accurate pixel-wise classification in the testing phase. Because different types of stomach lesions resemble one another, their accurate classification is difficult; this is addressed here by extracting deep features from global input images using a pre-trained ResNet-50 model. Furthermore, the latest advances in uncertainty estimation and model interpretability are applied to the classification of different types of stomach infections. The classification results estimate the uncertainty associated with the vital features in the input and show how uncertainty and interpretability can be modeled in ResNet-50 for this task. The proposed model achieved prediction scores of up to 90%, authenticating the method's performance.
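The abstract reports pixel-wise classification against ground-truth masks. A standard way to score such segmentations, though not necessarily the metric used in this study, is the per-class Dice overlap; a minimal pure-Python sketch, with illustrative mask values and class ids:

```python
def dice_per_class(pred, truth, classes):
    """Per-class Dice coefficient between two label masks (flat lists of class ids)."""
    scores = {}
    for c in classes:
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(truth) if v == c}
        denom = len(p) + len(t)
        # Dice = 2|P ∩ T| / (|P| + |T|); define an absent class as a perfect 1.0
        scores[c] = 1.0 if denom == 0 else 2 * len(p & t) / denom
    return scores

# Toy 4x2 mask flattened to a list: class 0 = background, 1 = lesion
pred  = [0, 1, 1, 0, 0, 1, 0, 0]
truth = [0, 1, 1, 1, 0, 1, 0, 0]
print(dice_per_class(pred, truth, [0, 1]))
```

For real masks the same computation is done over all pixels of each image (or volume, for 3D segmentation) and averaged across the test set.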
|
23
|
Trasolini R, Byrne MF. Artificial intelligence and deep learning for small bowel capsule endoscopy. Dig Endosc 2021; 33:290-297. [PMID: 33211357 DOI: 10.1111/den.13896] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 11/16/2020] [Indexed: 12/20/2022]
Abstract
Capsule endoscopy is ideally suited to artificial intelligence-based interpretation given its reliance on pattern recognition in still images. Time-saving viewing modes and lesion detection features currently available rely on machine learning algorithms, a form of artificial intelligence. Current software necessitates close human supervision given its poor sensitivity relative to an expert reader. However, with the advent of deep learning, artificial intelligence is becoming increasingly reliable and will be increasingly relied upon. We review the major advances in artificial intelligence for capsule endoscopy in recent publications and briefly review the development of artificial intelligence for historical context. Importantly, recent advancements in artificial intelligence have not yet been incorporated into practice, and it is premature to judge the potential of this technology based on current platforms. Remaining regulatory and standardization hurdles are being overcome, and artificial intelligence-based clinical applications are likely to proliferate rapidly.
Affiliation(s)
- Roberto Trasolini
- Department of Medicine, The University of British Columbia, Vancouver, Canada
| | - Michael F Byrne
- Department of Medicine, The University of British Columbia, Vancouver, Canada
| |
|
24
|
Dray X, Iakovidis D, Houdeville C, Jover R, Diamantis D, Histace A, Koulaouzidis A. Artificial intelligence in small bowel capsule endoscopy - current status, challenges and future promise. J Gastroenterol Hepatol 2021; 36:12-19. [PMID: 33448511 DOI: 10.1111/jgh.15341] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Revised: 11/05/2020] [Accepted: 11/05/2020] [Indexed: 12/24/2022]
Abstract
Neural network-based solutions are under development to relieve physicians of the tedious task of reviewing small-bowel capsule endoscopy. Computer-assisted detection is a critical step, aiming to reduce reading times while maintaining accuracy. Weakly supervised solutions have shown promising results; however, video-level evaluations are scarce, and no prospective studies have been conducted yet. Automated characterization (in terms of diagnosis and pertinence) by supervised machine learning solutions is the next step. It relies on large, thoroughly labeled databases, for which preliminary "ground truth" definitions by experts are of tremendous importance. Other developments are underway to assist physicians in localizing anatomical landmarks and findings in the small bowel, in measuring lesions, and in rating bowel cleanliness. It remains an open question whether artificial intelligence will enter the market as proprietary built-in or plug-in software or as a universal cloud-based service, and how it will be accepted by physicians and patients.
Affiliation(s)
- Xavier Dray
- Sorbonne Université, Centre d'Endoscopie Digestive, Hôpital Saint-Antoine, APHP, Paris, France.,ETIS UMR 8051 (CY Paris Cergy University, ENSEA, CNRS), Cergy, France
| | - Dimitris Iakovidis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
| | - Charles Houdeville
- Sorbonne Université, Centre d'Endoscopie Digestive, Hôpital Saint-Antoine, APHP, Paris, France
| | - Rodrigo Jover
- Servicio de Medicina Digestiva, Hospital General Universitario de Alicante, Instituto de Investigación Biomédica ISABIAL, Alicante, Spain
| | - Dimitris Diamantis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
| | - Aymeric Histace
- ETIS UMR 8051 (CY Paris Cergy University, ENSEA, CNRS), Cergy, France
| | | |
|
25
|
Barash Y, Azaria L, Soffer S, Margalit Yehuda R, Shlomi O, Ben-Horin S, Eliakim R, Klang E, Kopylov U. Ulcer severity grading in video capsule images of patients with Crohn's disease: an ordinal neural network solution. Gastrointest Endosc 2021; 93:187-192. [PMID: 32535191 DOI: 10.1016/j.gie.2020.05.066] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Accepted: 05/26/2020] [Indexed: 02/07/2023]
Abstract
BACKGROUND AND AIMS Capsule endoscopy (CE) is an important modality for diagnosis and follow-up of Crohn's disease (CD). The severity of ulcers at endoscopy is significant for predicting the course of CD. Deep learning has proven accurate in detecting ulcers on CE; however, endoscopic classification of ulcers by deep learning has not been attempted. The aim of our study was to develop a deep learning algorithm for automated grading of CD ulcers on CE. METHODS We retrospectively collected CE images of CD ulcers from our CE database. In experiment 1, the severity of each ulcer was graded by 2 capsule readers based on the PillCam CD classification (grades 1-3, from mild to severe), and inter-reader variability was evaluated. In experiment 2, a consensus reading by 3 capsule readers was used to train an ordinal convolutional neural network (CNN) to automatically grade images of ulcers, and the resulting algorithm was tested against the consensus reading. A pretraining stage included training the network on images of normal and ulcerated mucosa. RESULTS Overall, our dataset included 17,640 CE images from 49 patients: 7391 images with mucosal ulcers and 10,249 normal images. A total of 2598 randomly selected pathologic images were further graded from 1 to 3 according to ulcer severity in the 2 experiments. In experiment 1, overall inter-reader agreement occurred for 31% of the images (345 of 1108) and for 76% (752 of 989) in distinguishing grades 1 and 3. In experiment 2, the algorithm was trained on 1242 images. It achieved overall agreement with the consensus reading for 67% of images (166 of 248) and for 91% (158 of 173) in distinguishing grades 1 and 3. The classification accuracy of the algorithm was 0.91 (95% confidence interval, 0.867-0.954) for grade 1 versus grade 3 ulcers, 0.78 (95% confidence interval, 0.716-0.844) for grade 2 versus grade 3, and 0.624 (95% confidence interval, 0.547-0.701) for grade 1 versus grade 2.
CONCLUSIONS The CNN achieved high accuracy in detecting severe CD ulcerations. CNN-assisted CE readings in patients with CD can potentially facilitate and improve diagnosis and monitoring in these patients.
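Ordinal CNNs of the kind described above are commonly built by recasting an ordered grade as a stack of cumulative binary targets. The sketch below illustrates that encoding and its decoding for three grades, as in the PillCam CD classification; the function names are hypothetical, and the paper does not specify its exact encoding:

```python
def ordinal_targets(grade, n_grades=3):
    """Encode an ordinal grade (1..n_grades) as cumulative binary targets.

    Grade k becomes a vector of n_grades-1 thresholds: t[j] = 1 if grade > j+1.
    A network with sigmoid outputs trained on these targets respects grade order.
    """
    return [1 if grade > j + 1 else 0 for j in range(n_grades - 1)]

def decode(outputs, threshold=0.5):
    """Predicted grade = 1 + number of thresholds the network clears."""
    return 1 + sum(1 for o in outputs if o > threshold)

print(ordinal_targets(1))  # -> [0, 0]
print(ordinal_targets(2))  # -> [1, 0]
print(ordinal_targets(3))  # -> [1, 1]
print(decode([0.9, 0.2]))  # -> 2
```

Unlike plain one-hot classification, this encoding penalizes a grade-1-versus-grade-3 confusion more than a grade-1-versus-grade-2 confusion, matching the clinical ordering of ulcer severity.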
Affiliation(s)
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Liran Azaria
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Shelly Soffer
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Reuma Margalit Yehuda
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Oranit Shlomi
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Shomron Ben-Horin
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Rami Eliakim
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| | - Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
| | - Uri Kopylov
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
| |
|
26
|
Atsawarungruangkit A, Elfanagely Y, Asombang AW, Rupawala A, Rich HG. Understanding deep learning in capsule endoscopy: Can artificial intelligence enhance clinical practice? Artif Intell Gastrointest Endosc 2020; 1:33-43. [DOI: 10.37126/aige.v1.i2.33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 10/01/2020] [Accepted: 10/13/2020] [Indexed: 02/06/2023] Open
Abstract
Wireless capsule endoscopy (WCE) enables physicians to examine the gastrointestinal tract by transmitting images wirelessly from a disposable capsule to a data recorder. Although WCE is the least invasive endoscopy technique for diagnosing gastrointestinal disorders, interpreting a WCE study requires significant time, effort, and training. Analysis of images by artificial intelligence, through advances such as machine or deep learning, has been increasingly applied to medical imaging. There has been substantial interest in using deep learning to detect various gastrointestinal disorders based on WCE images. This article discusses basic knowledge of deep learning, applications of deep learning in WCE, and the implementation of deep learning models in a clinical setting. We anticipate continued research investigating the use of deep learning in interpreting WCE studies to generate predictive algorithms and aid in the diagnosis of gastrointestinal disorders.
Affiliation(s)
- Amporn Atsawarungruangkit
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Yousef Elfanagely
- Department of Internal Medicine, Brown University, Providence, RI 02903, United States
| | - Akwi W Asombang
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Abbas Rupawala
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| | - Harlan G Rich
- Division of Gastroenterology, Warren Alpert School of Medicine, Brown University, Providence, RI 02903, United States
| |
|
27
|
Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92:831-839.e8. [PMID: 32334015 DOI: 10.1016/j.gie.2020.04.039] [Citation(s) in RCA: 95] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Accepted: 04/13/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Deep learning is an innovative algorithm based on neural networks. Wireless capsule endoscopy (WCE) is considered the criterion standard for detecting small-bowel diseases. Manual examination of WCE is time-consuming and can benefit from automatic detection using artificial intelligence (AI). We aimed to perform a systematic review of the current literature pertaining to deep learning implementation in WCE. METHODS We conducted a search in PubMed for all original publications on the subject of deep learning applications in WCE published between January 1, 2016 and December 15, 2019. Evaluation of the risk of bias was performed using tailored Quality Assessment of Diagnostic Accuracy Studies-2. Pooled sensitivity and specificity were calculated. Summary receiver operating characteristic curves were plotted. RESULTS Of the 45 studies retrieved, 19 studies were included. All studies were retrospective. Deep learning applications for WCE included detection of ulcers, polyps, celiac disease, bleeding, and hookworm. Detection accuracy was above 90% for most studies and diseases. Pooled sensitivity and specificity for ulcer detection were .95 (95% confidence interval [CI], .89-.98) and .94 (95% CI, .90-.96), respectively. Pooled sensitivity and specificity for bleeding or bleeding source were .98 (95% CI, .96-.99) and .99 (95% CI, .97-.99), respectively. CONCLUSIONS Deep learning has achieved excellent performance for the detection of a range of diseases in WCE. Notwithstanding, current research is based on retrospective studies with a high risk of bias. Thus, future prospective, multicenter studies are necessary for this technology to be implemented in the clinical use of WCE.
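The per-study sensitivities and specificities feeding such a meta-analysis come from each study's 2x2 confusion table. A minimal sketch computing them with Wilson 95% score intervals; the counts are hypothetical, and the pooled CIs in the paper come from a different, model-based method:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity, each with a Wilson 95% CI, from a 2x2 table."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Hypothetical ulcer-detection confusion counts
print(sens_spec(tp=190, fp=12, fn=10, tn=188))
```

The Wilson interval is preferred over the naive normal approximation for proportions near 1, which is the typical regime for the detection accuracies reported above.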
|