1. Rey JF. As how artificial intelligence is revolutionizing endoscopy. Clin Endosc 2024;57:302-308. [PMID: 38454543] [PMCID: PMC11133999] [DOI: 10.5946/ce.2023.230]
Abstract
With incessant advances in information technology and its implications in all domains of our lives, artificial intelligence (AI) has emerged as a requirement for improved machine performance. This brings forth the query of how this can benefit endoscopists and improve both diagnostic and therapeutic endoscopy in each part of the gastrointestinal tract. It also raises the question of the recent benefits and clinical usefulness of this new technology in daily endoscopic practice. There are two main categories of AI systems: computer-assisted detection (CADe) for lesion detection and computer-assisted diagnosis (CADx) for optical biopsy and lesion characterization. Quality assurance is the next step in the complete monitoring of high-quality colonoscopies. In all cases, this remains computer-aided endoscopy: the overall result relies on the physician. Video capsule endoscopy is a unique example in which a computer operates a device, stores multiple images, and performs an accurate diagnosis. While there are many expectations, we need to standardize and assess the various software packages. It is important for healthcare providers to support this new development and make its use an obligation in daily clinical practice. In summary, AI represents a breakthrough in digestive endoscopy. Screening for gastric and colonic cancer detection should be improved, particularly outside expert centers. Prospective and multicenter trials are mandatory before introducing new software into clinical practice.
Affiliation(s)
- Jean-Francois Rey: Institut Arnault Tzanck, Gastrointestinal Unit, Saint Laurent du Var, France
2. Lin J, Zhu S, Yin M, Xue H, Liu L, Liu X, Liu L, Xu C, Zhu J. Few-shot learning for the classification of intestinal tuberculosis and Crohn's disease on endoscopic images: A novel learn-to-learn framework. Heliyon 2024;10:e26559. [PMID: 38404881] [PMCID: PMC10884919] [DOI: 10.1016/j.heliyon.2024.e26559]
Abstract
Background and aim: Standard deep learning methods have been found inadequate in distinguishing between intestinal tuberculosis (ITB) and Crohn's disease (CD), a shortcoming largely attributed to the scarcity of available samples. In light of this limitation, our objective is to develop an innovative few-shot learning (FSL) system, specifically tailored for the efficient categorization and differential diagnosis of CD and ITB, using endoscopic image data with minimal sample requirements.
Methods: A total of 122 white-light endoscopic images (99 CD images and 23 ITB images) were collected (one ileum image from each patient). A 2-way, 3-shot FSL model integrating dual transfer learning and metric learning strategies was devised. The Xception architecture was selected as the foundation and underwent a dual transfer process using oesophagitis images sourced from HyperKvasir. Subsequently, the feature vectors derived from Xception for each query image were converted into predictive scores, calculated from the Euclidean distances to the six reference images in the support sets.
Results: The FSL model leveraging dual transfer learning exhibited enhanced performance (AUC 0.81) compared with a model relying on single transfer learning (AUC 0.56) across three evaluation rounds. Its performance also surpassed that of a less experienced endoscopist (AUC 0.56) and even a more seasoned specialist (AUC 0.61).
Conclusions: The FSL model we have developed demonstrates efficacy in distinguishing between CD and ITB using a limited dataset of endoscopic imagery. FSL holds value for enhancing the diagnostic capabilities for rare conditions.
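The metric-learning step described in this abstract (scoring a query embedding by its Euclidean distances to the six support-set reference images) can be sketched as follows. The 3-dimensional toy embeddings and the scoring-by-negative-mean-distance rule are illustrative assumptions, not the paper's actual Xception features or code:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def few_shot_scores(query, support):
    """Score a query embedding against a 2-way, 3-shot support set:
    negative mean distance to each class's reference embeddings,
    so the closer class gets the higher score."""
    return {
        label: -sum(euclidean(query, v) for v in vecs) / len(vecs)
        for label, vecs in support.items()
    }

# toy 3-D embeddings standing in for Xception feature vectors
support = {
    "CD":  [[0.1, 0.0, 0.2], [0.0, 0.1, 0.1], [0.2, 0.1, 0.0]],
    "ITB": [[2.9, 3.1, 3.0], [3.1, 2.8, 3.0], [3.0, 3.0, 3.2]],
}
query = [2.8, 3.0, 2.9]          # lies near the ITB cluster
scores = few_shot_scores(query, support)
pred = max(scores, key=scores.get)
```

The query is classified to whichever class's support images it sits closest to in embedding space; no per-class training is needed beyond the three reference images.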
Affiliation(s)
- Jiaxi Lin, Shiqi Zhu, Minyue Yin, Lu Liu, Xiaolin Liu, Lihe Liu, Chunfang Xu, Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Hongchen Xue: School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, 215006, China
3. Zhu S, Gao J, Liu L, Yin M, Lin J, Xu C, Xu C, Zhu J. Public Imaging Datasets of Gastrointestinal Endoscopy for Artificial Intelligence: a Review. J Digit Imaging 2023;36:2578-2601. [PMID: 37735308] [PMCID: PMC10584770] [DOI: 10.1007/s10278-023-00844-7]
Abstract
With the advances in endoscopic technologies and artificial intelligence, a large number of endoscopic imaging datasets have been made public to researchers around the world. This study aims to review and introduce these datasets. An extensive literature search was conducted to identify appropriate datasets in PubMed, and other targeted searches were conducted in GitHub, Kaggle, and Simula to identify datasets directly. We provided a brief introduction to each dataset and evaluated the characteristics of the datasets included. Moreover, two national datasets in progress were discussed. A total of 40 datasets of endoscopic images were included, of which 34 were accessible for use. Basic and detailed information on each dataset was reported. Of all the datasets, 16 focus on polyps, and 6 focus on small bowel lesions. Most datasets (n = 16) were constructed by colonoscopy only, followed by normal gastrointestinal endoscopy and capsule endoscopy (n = 9). This review may facilitate the usage of public dataset resources in endoscopic research.
Affiliation(s)
- Shiqi Zhu, Jingwen Gao, Lu Liu, Minyue Yin, Jiaxi Lin, Chang Xu, Chunfang Xu, Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
4. Cho SI, Navarrete-Dechent C, Daneshjou R, Cho HS, Chang SE, Kim SH, Na JI, Han SS. Generation of a Melanoma and Nevus Data Set From Unstandardized Clinical Photographs on the Internet. JAMA Dermatol 2023;159:1223-1231. [PMID: 37792351] [PMCID: PMC10551819] [DOI: 10.1001/jamadermatol.2023.3521]
Abstract
Importance: Artificial intelligence (AI) training for diagnosing dermatologic images requires large amounts of clean data. Dermatologic images have different compositions, and many are inaccessible due to privacy concerns, which hinders the development of AI.
Objective: To build a training data set for discriminative and generative AI from unstandardized internet images of melanoma and nevus.
Design, Setting, and Participants: In this diagnostic study, a total of 5619 (CAN5600 data set) and 2006 (CAN2000 data set; a manually revised subset of CAN5600) cropped lesion images of either melanoma or nevus were semiautomatically annotated from approximately 500 000 photographs on the internet using convolutional neural networks (CNNs), region-based CNNs, and large mask inpainting. For unsupervised pretraining, 132 673 possible lesions (LESION130k data set) were also created, with diversity ensured by collecting images from 18 482 websites in approximately 80 countries. A total of 5000 synthetic images (GAN5000 data set) were generated using a generative adversarial network (StyleGAN2-ADA; training, CAN2000 data set; pretraining, LESION130k data set).
Main Outcomes and Measures: The area under the receiver operating characteristic curve (AUROC) for determining malignant neoplasms was analyzed. In each test, 1 of 7 preexisting public data sets (2312 images in total, including Edinburgh, an SNU subset, Asan test, Waterloo, 7-point criteria evaluation, PAD-UFES-20, and MED-NODE) was used as the test data set. A comparative study was then conducted between the performance of the EfficientNet Lite0 CNN trained on the proposed data sets and that trained on the remaining 6 preexisting data sets.
Results: The EfficientNet Lite0 CNN trained on the annotated or synthetic images achieved mean (SD) AUROCs higher than or equivalent to those of the EfficientNet Lite0 trained on the pathologically confirmed public data sets combined (0.809 [0.063]): CAN5600, 0.874 (0.042), P = .02; CAN2000, 0.848 (0.027), P = .08; GAN5000, 0.838 (0.040), P = .31 (Wilcoxon signed rank test), a benefit of the increased size of the training data set.
Conclusions and Relevance: The data sets in this diagnostic study were created from internet images using various AI technologies. A neural network trained on the created data set (CAN5600) performed better than the same network trained on the preexisting data sets combined. Both the annotated (CAN5600 and LESION130k) and synthetic (GAN5000) data sets could be shared for AI training and consensus between physicians.
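The study's headline metric, AUROC, has a simple rank-based reading: the probability that a randomly chosen malignant case is scored above a randomly chosen benign one. A minimal sketch of that identity, using made-up scores rather than the study's data:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy classifier scores: 1 = melanoma, 0 = nevus (illustrative only);
# one melanoma (0.4) is outranked by one nevus (0.6): 8 of 9 pairs correct
value = auroc([0.9, 0.8, 0.6, 0.4, 0.3, 0.2],
              [1,   1,   0,   1,   0,   0])
```

An AUROC of 0.874 as reported for CAN5600 therefore means roughly 87% of melanoma/nevus pairs are ranked correctly by the network's score.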
Affiliation(s)
- Roxana Daneshjou: Department of Dermatology, Stanford University, Stanford, California
- Hye Soo Cho, Sung Eun Chang: Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
- Seong Hwan Kim: Department of Plastic and Reconstructive Surgery, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Jung-Im Na: Department of Dermatology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seoul, Korea
- Seung Seog Han: Department of Dermatology, I Dermatology Clinic, Seoul, Korea; IDerma Inc, Seoul, Korea
5. Rey JF. Artificial intelligence in digestive endoscopy: recent advances. Curr Opin Gastroenterol 2023. [PMID: 37522929] [DOI: 10.1097/mog.0000000000000957]
Abstract
PURPOSE OF REVIEW: With the incessant advances in information technology and its implications in all domains of our life, artificial intelligence (AI) started to emerge as a need for better machine performance. This review considers how AI can help endoscopists, which areas of diagnostic and therapeutic endoscopy it can improve in each part of the gastrointestinal (GI) tract, and what recent benefits and clinical usefulness this new technology offers in daily endoscopic practice.
RECENT FINDINGS: The two main categories of AI systems are computer-assisted detection (CADe) for lesion detection and computer-assisted diagnosis (CADx) for optical biopsy and lesion characterization. Multiple software packages are now implemented in endoscopy practice. Other AI systems offer therapeutic assistance, such as lesion delineation for complete endoscopic resection or prediction of possible lymph node involvement after endoscopic treatment. Quality assurance is the coming step, with complete monitoring of high-quality colonoscopy. In all cases it remains computer-aided endoscopy, as the overall result relies on the physician. Video capsule endoscopy is the unique example in which the computer conducts the device, stores multiple images, and performs an accurate diagnosis.
SUMMARY: AI is a breakthrough in digestive endoscopy. Screening for gastric and colonic cancer detection should be improved, especially outside expert centers. Prospective and multicenter trials are mandatory before introducing new software into clinical practice.
Affiliation(s)
- Jean-Francois Rey: Arnault Tzanck Institute, 116 rue du commandant Cahuzac, Saint Laurent du Var, France
6. Bian H, Jiang M, Qian J. The investigation of constraints in implementing robust AI colorectal polyp detection for sustainable healthcare system. PLoS One 2023;18:e0288376. [PMID: 37437026] [DOI: 10.1371/journal.pone.0288376]
Abstract
Colorectal cancer (CRC) is one of the significant threats to public health and to the sustainable healthcare system during urbanization. As the primary screening method, colonoscopy can effectively detect polyps before they evolve into cancerous growths. However, visual inspection by endoscopists is insufficient to provide consistently reliable polyp detection in colonoscopy videos and images for CRC screening. Artificial intelligence (AI)-based object detection is considered a potent solution for overcoming the limitations of visual inspection and mitigating human error in colonoscopy. This study implemented a YOLOv5 object detection model to investigate the performance of mainstream one-stage approaches in colorectal polyp detection. A variety of training datasets and model structure configurations were employed to identify the determinative factors for practical applications. The designed experiments show that the model yields acceptable results when assisted by transfer learning, and highlight that the primary constraint on implementing deep learning polyp detection is the scarcity of training data. Model performance improved by 15.6% in terms of average precision (AP) when the original training dataset was expanded. Furthermore, the experimental results were analysed from a clinical perspective to identify potential causes of false positives. Finally, a quality management framework is proposed for future dataset preparation and model development in AI-driven polyp detection tasks for smart healthcare solutions.
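The AP metric this study reports summarizes the detector's precision-recall curve in a single number. A generic all-point-interpolation implementation (not the authors' evaluation code; the four operating points below are illustrative) can be sketched as:

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the precision-recall curve,
    using the running maximum of precision from the right as the envelope."""
    r = [0.0] + list(recalls)       # prepend recall 0 so the first step counts
    p = [0.0] + list(precisions)
    for i in range(len(p) - 2, -1, -1):   # precision envelope, right to left
        p[i] = max(p[i], p[i + 1])
    return sum((r[i] - r[i - 1]) * p[i] for i in range(1, len(r)))

# four operating points of a hypothetical polyp detector at
# decreasing confidence thresholds
ap = average_precision(recalls=[0.2, 0.4, 0.6, 0.8],
                       precisions=[1.0, 0.9, 0.7, 0.5])
```

A 15.6% AP gain from expanding the training data, as reported above, corresponds to this area growing as precision holds up at higher recall.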
Affiliation(s)
- Haitao Bian: College of Safety Science and Engineering, Nanjing Tech University, Nanjing, Jiangsu, China
- Min Jiang: KLA Corporation, Milpitas, California, United States of America
- Jingjing Qian: Department of Gastroenterology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, China
7. Dhaliwal J, Walsh CM. Artificial Intelligence in Pediatric Endoscopy: Current Status and Future Applications. Gastrointest Endosc Clin N Am 2023;33:291-308. [PMID: 36948747] [DOI: 10.1016/j.giec.2022.12.001]
Abstract
The application of artificial intelligence (AI) has great promise for improving pediatric endoscopy. The majority of preclinical studies have been undertaken in adults, with the greatest progress being made in the context of colorectal cancer screening and surveillance. This development has only been possible with advances in deep learning, like the convolutional neural network model, which has enabled real-time detection of pathology. Comparatively, the majority of deep learning systems developed in inflammatory bowel disease have focused on predicting disease severity and were developed using still images rather than videos. The application of AI to pediatric endoscopy is in its infancy, thus providing an opportunity to develop clinically meaningful and fair systems that do not perpetuate societal biases. In this review, we provide an overview of AI, summarize the advances of AI in endoscopy, and describe its potential application to pediatric endoscopic practice and education.
Affiliation(s)
- Jasbir Dhaliwal: Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children's Hospital Medical Center, University of Cincinnati, OH, USA
- Catharine M Walsh: Division of Gastroenterology, Hepatology, and Nutrition, and the SickKids Research and Learning Institutes, The Hospital for Sick Children, Toronto, ON, Canada; Department of Paediatrics and The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
8. Kamba S, Sumiyama K. Benchmark test for the characterization of colorectal polyps using a computer-aided diagnosis with a publicly accessible database. Dig Endosc 2023. [PMID: 36944582] [DOI: 10.1111/den.14540]
Affiliation(s)
- Shunsuke Kamba: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan; Developmental Endoscopy Unit, Gastroenterology and Hepatology, Mayo Clinic, Rochester, USA
- Kazuki Sumiyama: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
9. Nogueira-Rodríguez A, Glez-Peña D, Reboiro-Jato M, López-Fernández H. Negative Samples for Improving Object Detection: A Case Study in AI-Assisted Colonoscopy for Polyp Detection. Diagnostics (Basel) 2023;13:966. [PMID: 36900110] [PMCID: PMC10001273] [DOI: 10.3390/diagnostics13050966]
Abstract
Deep learning object-detection models are being successfully applied to develop computer-aided diagnosis systems for aiding polyp detection during colonoscopies. Here, we evidence the need to include negative samples for both (i) reducing false positives during the polyp-finding phase, by including images with artifacts that may confuse the detection models (e.g., medical instruments, water jets, feces, blood, excessive proximity of the camera to the colon wall, blurred images, etc.) that are usually not included in model development datasets, and (ii) correctly estimating a more realistic performance of the models. By retraining our previously developed YOLOv3-based detection model with a dataset that includes 15% of additional not-polyp images with a variety of artifacts, we were able to generally improve its F1 performance in our internal test datasets (from an average F1 of 0.869 to 0.893), which now include such type of images, as well as in four public datasets that include not-polyp images (from an average F1 of 0.695 to 0.722).
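The F1 gains reported above come from cutting false positives while holding recall steady. The arithmetic can be checked with a small sketch; the confusion-matrix counts are illustrative, not the paper's actual numbers:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical polyp detector on a fixed test set: retraining with
# not-polyp images halves the false positives at the same recall
before = f1_score(tp=90, fp=30, fn=10)   # precision 0.75,  recall 0.90
after  = f1_score(tp=90, fp=15, fn=10)   # precision ~0.86, recall 0.90
```

Because F1 is the harmonic mean, it rewards the reduction in false alarms without requiring any change in how many real polyps are found, which mirrors the 0.869 to 0.893 improvement the authors describe.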
Affiliation(s)
- Alba Nogueira-Rodríguez (corresponding author), Daniel Glez-Peña, Miguel Reboiro-Jato, Hugo López-Fernández: CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
10. Halvorsen N, Mori Y. Open access database for artificial intelligence research. Gastrointest Endosc 2023;97:200-201. [PMID: 36567202] [DOI: 10.1016/j.gie.2022.10.020]
Affiliation(s)
- Natalie Halvorsen: Clinical Effectiveness Research Group, University of Oslo and Oslo University Hospital, Oslo, Norway
- Yuichi Mori: Clinical Effectiveness Research Group, University of Oslo and Oslo University Hospital; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan