1
Gong EJ, Bang CS, Lee JJ. Computer-aided diagnosis in real-time endoscopy for all stages of gastric carcinogenesis: Development and validation study. United European Gastroenterol J 2024; 12:487-495. [PMID: 38400815 DOI: 10.1002/ueg2.12551]
Abstract
OBJECTIVE Using endoscopic images, we have previously developed computer-aided diagnosis models to predict the histopathology of gastric neoplasms. However, no model that categorizes every stage of gastric carcinogenesis has been published. In this study, a deep-learning-based diagnosis model was developed and validated to automatically classify all stages of gastric carcinogenesis, including atrophy and intestinal metaplasia, in endoscopy images. DESIGN A total of 18,701 endoscopic images were collected retrospectively and randomly divided into training, validation, and internal-test datasets in an 8:1:1 ratio. The primary outcome was lesion-classification accuracy across six categories: normal, atrophy, intestinal metaplasia, dysplasia, early gastric cancer, and advanced gastric cancer. External validation of the established model used 1427 novel images from other institutions that were not used in training, validation, or internal testing. RESULTS The internal-test lesion-classification accuracy was 91.2% (95% confidence interval: 89.9%-92.5%). On external validation, the established model achieved an accuracy of 82.3% (80.3%-84.3%). The external-test per-class areas under the receiver operating characteristic curve for the diagnosis of atrophy and intestinal metaplasia were 93.4% and 91.3%, respectively. CONCLUSIONS The established model demonstrated high performance in the diagnosis of preneoplastic lesions (atrophy and intestinal metaplasia) as well as gastric neoplasms.
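The 8:1:1 random partition described in this abstract can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name and seed are invented for the example.

```python
import random

def split_dataset(image_ids, train_frac=0.8, val_frac=0.1, seed=42):
    # Shuffle a copy of the IDs with a fixed seed, then slice into
    # training, validation, and internal-test subsets (8:1:1 by default).
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# 18,701 images, as in the study's retrospective collection.
train_set, val_set, test_set = split_dataset(range(18701))
```

With integer truncation, the three subsets remain disjoint and together cover all 18,701 images.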
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Korea
2
Wu R, Qin K, Fang Y, Xu Y, Zhang H, Li W, Luo X, Han Z, Liu S, Li Q. Application of the convolution neural network in determining the depth of invasion of gastrointestinal cancer: a systematic review and meta-analysis. J Gastrointest Surg 2024; 28:538-547. [PMID: 38583908 DOI: 10.1016/j.gassur.2023.12.029]
Abstract
BACKGROUND With the development of endoscopic technology, endoscopic submucosal dissection (ESD) has been widely used in the treatment of gastrointestinal tumors. The depth of tumor invasion must be evaluated before ESD is applied. The convolutional neural network (CNN) is a type of artificial intelligence with the potential to assist in classifying the depth of invasion in endoscopic images. This meta-analysis aimed to evaluate the performance of CNNs in determining the depth of invasion of gastrointestinal tumors. METHODS PubMed, Web of Science, and SinoMed were searched to collect original publications on the use of CNNs in determining the depth of invasion of gastrointestinal neoplasms. Pooled sensitivity and specificity were calculated using an exact binomial rendition of the bivariate mixed-effects regression model. The I² statistic was used to evaluate heterogeneity. RESULTS A total of 17 articles were included; the pooled sensitivity was 84% (95% CI, 0.81-0.88), specificity was 91% (95% CI, 0.85-0.94), and the area under the curve (AUC) was 0.93 (95% CI, 0.90-0.95). The performance of the CNNs was significantly better than that of endoscopists (AUC: 0.93 vs 0.83; P = .0005). CONCLUSION Our review revealed that the CNN is one of the most effective endoscopic methods for evaluating the depth of invasion of early gastrointestinal tumors, with the potential to serve as a valuable tool that helps clinical endoscopists decide whether a lesion is amenable to endoscopic treatment.
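For intuition, pooled sensitivity and specificity can be computed from per-study confusion counts. The naive fixed-effect pooling below simply sums counts; the meta-analysis above instead fits a bivariate mixed-effects model, which additionally accounts for between-study heterogeneity and the sensitivity-specificity correlation. The counts here are invented for the example.

```python
def pooled_sens_spec(studies):
    # studies: iterable of (tp, fp, fn, tn) tuples, one per study.
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

sens, spec = pooled_sens_spec([(84, 9, 16, 91), (40, 4, 10, 46)])
```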
Affiliation(s)
- Ruo Wu
- Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China
- Kaiwen Qin
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yuxin Fang
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yuyuan Xu
- Department of Hepatology Unit and Infectious Diseases, State Key Laboratory of Organ Failure Research, Guangdong Provincial Key Laboratory of Viral Hepatitis Research, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Haonan Zhang
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Wenhua Li
- Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China
- Xiaobei Luo
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Zelong Han
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Side Liu
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China; Pazhou Lab, Guangzhou, Guangdong, China
- Qingyuan Li
- Department of Gastroenterology, Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
3
Thirunavukarasu AJ, Elangovan K, Gutierrez L, Hassan R, Li Y, Tan TF, Cheng H, Teo ZL, Lim G, Ting DSW. Clinical performance of automated machine learning: A systematic review. Ann Acad Med Singap 2024; 53:187-207. [PMID: 38920245 DOI: 10.47102/annals-acadmedsg.2023113]
Abstract
Introduction Automated machine learning (autoML) removes technical and technological barriers to building artificial intelligence models. We aimed to summarise the clinical applications of autoML, assess the capabilities of the platforms used, evaluate the quality of the evidence trialling autoML, and gauge the performance of autoML platforms relative to conventionally developed models, as well as to each other. Method This review adhered to a prospectively registered protocol (PROSPERO identifier CRD42022344427). The Cochrane Library, Embase, MEDLINE and Scopus were searched from inception to 11 July 2022. Two researchers screened abstracts and full texts, extracted data and conducted quality assessment. Disagreement was resolved through discussion and, if required, arbitration by a third researcher. Results There were 26 distinct autoML platforms featured in 82 studies. Brain and lung disease were the most common of the 22 specialties studied. AutoML exhibited variable performance: area under the receiver operating characteristic curve (AUCROC) 0.35-1.00, F1-score 0.16-0.99, area under the precision-recall curve (AUPRC) 0.51-1.00. AutoML exhibited the highest AUCROC in 75.6% of trials, the highest F1-score in 42.3% of trials, and the highest AUPRC in 83.3% of trials. In comparisons of autoML platforms, AutoPrognosis and Amazon Rekognition performed strongest with unstructured and structured data, respectively. Quality of reporting was poor, with a median DECIDE-AI score of 14 of 27. Conclusion A myriad of autoML platforms have been applied in a variety of clinical contexts. The performance of autoML compares well to bespoke computational and clinical benchmarks. Further work is required to improve the quality of validation studies. AutoML may facilitate a transition to data-centric development, and integration with large language models may enable AI to build itself to fulfil user-defined goals.
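The AUCROC values compared above can be computed from raw model scores via the Mann-Whitney formulation: AUCROC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch follows; the labels and scores are invented for the example.

```python
def auroc(labels, scores):
    # Mann-Whitney formulation: count positive-negative pairs in which
    # the positive outscores the negative, with ties counted as half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

score = auroc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1])
```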
Affiliation(s)
- Arun James Thirunavukarasu
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- University of Cambridge School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Kabilan Elangovan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Laura Gutierrez
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Refaat Hassan
- University of Cambridge School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Yong Li
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Haoran Cheng
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Rollins School of Public Health, Emory University, Atlanta, Georgia, USA
- Gilbert Lim
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Singapore National Eye Centre, Singapore
4
Klang E, Sourosh A, Nadkarni GN, Sharif K, Lahat A. Deep Learning and Gastric Cancer: Systematic Review of AI-Assisted Endoscopy. Diagnostics (Basel) 2023; 13:3613. [PMID: 38132197 PMCID: PMC10742887 DOI: 10.3390/diagnostics13243613]
Abstract
BACKGROUND Gastric cancer (GC), a significant health burden worldwide, is typically diagnosed in advanced stages due to its non-specific symptoms and complex morphological features. Deep learning (DL) has shown potential for improving and standardizing early GC detection. This systematic review aims to evaluate the current status of DL in the analysis of pre-malignant lesions, early-stage GC, and gastric neoplasia. METHODS A comprehensive literature search was conducted in PubMed/MEDLINE for original studies implementing DL algorithms for gastric neoplasia detection using endoscopic images. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was on studies providing quantitative diagnostic performance measures and those comparing AI performance with that of human endoscopists. RESULTS Our review encompasses 42 studies that utilize a variety of DL techniques. The findings demonstrate the utility of DL in GC classification, detection, tumor invasion depth assessment, cancer margin delineation, lesion segmentation, and detection of early-stage and pre-malignant lesions. Notably, DL models frequently matched or outperformed human endoscopists in diagnostic accuracy. However, heterogeneity in DL algorithms, imaging techniques, and study designs precluded a definitive conclusion about the best algorithmic approach. CONCLUSIONS The promise of artificial intelligence in improving and standardizing gastric neoplasia detection, diagnosis, and segmentation is significant. This review is limited by the predominance of single-center studies and undisclosed datasets used in AI training, which affects generalizability and demographic representation. Furthermore, retrospective algorithm training may not reflect actual clinical performance, and a lack of model details hinders replication efforts. More research is needed to substantiate these findings, including larger-scale multi-center studies, prospective clinical trials, and comprehensive technical reporting of DL algorithms and datasets, particularly regarding the heterogeneity in DL algorithms and study designs.
Affiliation(s)
- Eyal Klang
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- ARC Innovation Center, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Ali Sourosh
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Girish N. Nadkarni
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Kassem Sharif
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Adi Lahat
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
5
Gong EJ, Bang CS, Lee JJ, Jeong HM, Baik GH, Jeong JH, Dick S, Lee GH. Clinical Decision Support System for All Stages of Gastric Carcinogenesis in Real-Time Endoscopy: Model Establishment and Validation Study. J Med Internet Res 2023; 25:e50448. [PMID: 37902818 PMCID: PMC10644184 DOI: 10.2196/50448]
Abstract
BACKGROUND Our research group previously established a deep-learning-based clinical decision support system (CDSS) for real-time endoscopy-based detection and classification of gastric neoplasms. However, preneoplastic conditions, such as atrophy and intestinal metaplasia (IM), were not taken into account, and no established model classifies all stages of gastric carcinogenesis. OBJECTIVE This study aims to build and validate a CDSS for real-time endoscopy covering all stages of gastric carcinogenesis, including atrophy and IM. METHODS A total of 11,868 endoscopic images were used for training and internal testing. The primary outcomes were lesion-classification accuracy (6 classes: advanced gastric cancer, early gastric cancer, dysplasia, atrophy, IM, and normal) and the atrophy and IM lesion segmentation rate for the segmentation model. The following tests were carried out to validate the lesion-classification performance: (1) external testing using 1282 images from another institution and (2) prospective evaluation of the classification accuracy for atrophy and IM in real-world procedures. To estimate clinical utility, 2 experienced endoscopists were invited to perform a blind test with the same data set. A CDSS was constructed by combining the established 6-class lesion classification model and the preneoplastic lesion segmentation model with the previously established lesion detection model. RESULTS The overall lesion-classification accuracy (95% CI) was 90.3% (89%-91.6%) in the internal test. For the performance validation, the CDSS achieved 85.3% (83.4%-97.2%) overall accuracy. The per-class external-test accuracies for atrophy and IM were 95.3% (92.6%-98%) and 89.3% (85.4%-93.2%), respectively. CDSS-assisted endoscopy showed an accuracy of 92.1% (88.8%-95.4%) for atrophy and 95.5% (92%-99%) for IM in the real-world application of 522 consecutive screening endoscopies. There was no significant difference in overall accuracy between the invited endoscopists and the established CDSS in the prospective real-clinic evaluation (P=.23). The CDSS demonstrated a segmentation rate of 93.4% (95% CI 92.4%-94.4%) for atrophy or IM lesions in the internal testing. CONCLUSIONS The CDSS achieved high performance in the computer-aided diagnosis of all stages of gastric carcinogenesis and demonstrated real-world application potential.
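The accuracies above are reported with 95% CIs; a normal-approximation interval for a proportion can be sketched as below. The interval method the authors used is not stated in the abstract, so this is an assumption, and the counts are invented for the example.

```python
import math

def accuracy_ci(correct, total, z=1.96):
    # Normal-approximation (Wald) confidence interval for a
    # classification accuracy, clipped to [0, 1]; z=1.96 gives ~95%.
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

acc, lo, hi = accuracy_ci(912, 1000)
```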
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Department of Anesthesiology, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Hae Min Jeong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
6
Wang Z, Liu Y, Niu X. Application of artificial intelligence for improving early detection and prediction of therapeutic outcomes for gastric cancer in the era of precision oncology. Semin Cancer Biol 2023; 93:83-96. [PMID: 37116818 DOI: 10.1016/j.semcancer.2023.04.009]
Abstract
Gastric cancer is a leading contributor to cancer incidence and mortality globally. Recently, artificial intelligence approaches, particularly machine learning and deep learning, have been rapidly reshaping the full spectrum of clinical management for gastric cancer. In machine learning, computers run repeated iterative models that progressively improve performance on a particular task. Deep learning is a subtype of machine learning based on multilayered neural networks inspired by the human brain. This review summarizes the application of artificial intelligence algorithms to multi-dimensional data, including clinical and follow-up information, conventional images (endoscopy, histopathology, and computed tomography (CT)), molecular biomarkers, and more, to improve the risk surveillance of gastric cancer given established risk factors; the accuracy of diagnosis and survival prediction among patients with established gastric cancer; and the prediction of treatment outcomes to assist clinical decision making. Artificial intelligence thus has a profound impact on almost all aspects of gastric cancer care, from improving diagnosis to precision medicine. Despite this, most established artificial-intelligence-based models remain in a research format and often have limited value in real-world clinical practice. With the increasing adoption of artificial intelligence in clinical use, we anticipate the arrival of artificial-intelligence-powered gastric cancer care.
Affiliation(s)
- Zhe Wang
- Department of Digestive Diseases 1, Cancer Hospital of China Medical University, Cancer Hospital of Dalian University of Technology, Liaoning Cancer Hospital & Institute, Shenyang 110042, Liaoning, China
- Yang Liu
- Department of Gastric Surgery, Cancer Hospital of China Medical University, Cancer Hospital of Dalian University of Technology, Liaoning Cancer Hospital & Institute, Shenyang 110042, Liaoning, China
- Xing Niu
- China Medical University, Shenyang 110122, Liaoning, China
7
Gong EJ, Bang CS, Lee JJ, Baik GH, Lim H, Jeong JH, Choi SW, Cho J, Kim DY, Lee KB, Shin SI, Sigmund D, Moon BI, Park SC, Lee SH, Bang KB, Son DS. Deep learning-based clinical decision support system for gastric neoplasms in real-time endoscopy: development and validation study. Endoscopy 2023; 55:701-708. [PMID: 36754065 DOI: 10.1055/a-2031-0691]
Abstract
BACKGROUND: Deep learning models have previously been established to predict the histopathology and invasion depth of gastric lesions using endoscopic images. This study aimed to establish and validate a deep-learning-based clinical decision support system (CDSS) for the automated detection and classification (diagnosis and invasion depth prediction) of gastric neoplasms in real-time endoscopy. METHODS: The same 5017 endoscopic images that were employed to establish previous models were used as the training data. The primary outcomes were: (i) the lesion detection rate for the detection model, and (ii) the lesion classification accuracy for the classification model. For performance validation of the lesion detection model, 2524 real-time procedures were tested in a randomized pilot study. Consecutive patients were allocated either to CDSS-assisted or conventional screening endoscopy, and the lesion detection rate was compared between the groups. For performance validation of the lesion classification model, a prospective multicenter external test was conducted using 3976 novel images from five institutions. RESULTS: The lesion detection rate was 95.6% (internal test). On performance validation, CDSS-assisted endoscopy showed a higher lesion detection rate than conventional screening endoscopy, although the difference was not statistically significant (2.0% vs. 1.3%; P = 0.21) (randomized study). The lesion classification accuracy was 89.7% in the four-class classification (advanced gastric cancer, early gastric cancer, dysplasia, and non-neoplastic) and 89.2% in the invasion depth prediction (mucosa-confined or submucosa-invaded; internal test). On performance validation, the CDSS reached 81.5% accuracy in the four-class classification and 86.4% accuracy in the binary classification (prospective multicenter external test). CONCLUSIONS: The CDSS demonstrated its potential for real-life clinical application, with high performance in lesion detection and in the classification of detected lesions in the stomach.
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Hyun Lim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Sung Chul Park
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Sang Hoon Lee
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Ki Bae Bang
- Department of Internal Medicine, Dankook University College of Medicine, Cheonan, South Korea
- Dae-Soon Son
- Division of Data Science, Data Science Convergence Research Center, Hallym University, Chuncheon, South Korea
8
Chung J, Oh DJ, Park J, Kim SH, Lim YJ. Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model. Diagnostics (Basel) 2023; 13:1389. [PMID: 37189489 DOI: 10.3390/diagnostics13081389]
Abstract
The first step in reading a capsule endoscopy (CE) is determining the gastrointestinal (GI) organ. Because CE produces too many inappropriate and repetitive images, automatic organ classification cannot be directly applied to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model relative to the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the AI score's cut-off improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time, and setting the cut-off of the AI score to 99.9% resulted in a better intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be more easily located by adjusting the cut-off of the AI score and visualization of its result over time.
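The transition-area detection described above can be approximated by thresholding the per-frame AI score and reporting where the confidently predicted organ changes. This is a minimal sketch with an invented frame format, using the 99.9% cut-off mentioned in the abstract.

```python
def find_transitions(frames, cutoff=0.999):
    # frames: time-ordered list of (predicted_organ, ai_score) pairs.
    # Keep only frames whose score clears the cut-off, then report the
    # frame index at which the confidently predicted organ changes.
    confident = [(i, organ) for i, (organ, score) in enumerate(frames)
                 if score >= cutoff]
    return [(i, prev, cur)
            for (_, prev), (i, cur) in zip(confident, confident[1:])
            if prev != cur]

frames = [("esophagus", 1.0), ("stomach", 0.5),
          ("stomach", 1.0), ("small bowel", 1.0)]
changes = find_transitions(frames)
```

Raising the cut-off discards uncertain frames (like the 0.5-score frame above), which is what makes the organ boundaries easier to locate in the visualization.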
Affiliation(s)
- Joowon Chung
- Department of Internal Medicine, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Dong Jun Oh
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Junseok Park
- Department of Internal Medicine, Digestive Disease Center, Institute for Digestive Research, Soonchunhyang University College of Medicine, Seoul 04401, Republic of Korea
- Su Hwan Kim
- Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul 07061, Republic of Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
9
Hamamoto R, Koyama T, Kouno N, Yasuda T, Yui S, Sudo K, Hirata M, Sunami K, Kubo T, Takasawa K, Takahashi S, Machino H, Kobayashi K, Asada K, Komatsu M, Kaneko S, Yatabe Y, Yamamoto N. Introducing AI to the molecular tumor board: one direction toward the establishment of precision medicine using large-scale cancer clinical and biological information. Exp Hematol Oncol 2022; 11:82. [PMID: 36316731 PMCID: PMC9620610 DOI: 10.1186/s40164-022-00333-7]
Abstract
Since U.S. President Barack Obama announced the Precision Medicine Initiative in his State of the Union address in 2015, the establishment of precision medicine systems has been emphasized worldwide, particularly in the field of oncology. With the advent of next-generation sequencers in particular, genome analysis technology has made remarkable progress, and there are active efforts to apply genome information to diagnosis and treatment. Generally, to feed the results of next-generation sequencing analysis back to patients, a molecular tumor board (MTB) consisting of experts in clinical oncology, genetic medicine, and related fields is established to discuss the results. At present, however, an MTB involves a large amount of work: humans search through vast databases and literature, select the best drug candidates, and manually confirm the status of available clinical trials. In addition, as personalized medicine advances, the burden on MTB members is expected to increase. Under these circumstances, introducing cutting-edge artificial intelligence (AI) and information and communication technology to MTBs, thereby reducing the burden on MTB members while building a platform that enables more accurate and personalized medical care, would be of great benefit to patients. In this review, we introduce the latest status of elemental technologies with potential for AI utilization in MTBs and discuss issues that may arise as AI implementation progresses.
Affiliation(s)
- Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Takafumi Koyama
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Nobuji Kouno
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Department of Surgery, Graduate School of Medicine, Kyoto University, Yoshida-konoe-cho, Sakyo-ku, Kyoto, 606-8303 Japan
- Tomohiro Yasuda
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo, 185-8601 Japan
- Shuntaro Yui
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo, 185-8601 Japan
- Kazuki Sudo
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Department of Medical Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Makoto Hirata
- Department of Genetic Medicine and Services, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Kuniko Sunami
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Takashi Kubo
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Ken Takasawa
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Satoshi Takahashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Kazuma Kobayashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Masaaki Komatsu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
- Yasushi Yatabe
- Department of Diagnostic Pathology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Division of Molecular Pathology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Noboru Yamamoto
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
10
|
Gong EJ, Bang CS, Lee JJ, Yang YJ, Baik GH. Impact of the Volume and Distribution of Training Datasets in the Development of Deep-Learning Models for the Diagnosis of Colorectal Polyps in Endoscopy Images. J Pers Med 2022; 12:jpm12091361. [PMID: 36143146 PMCID: PMC9505038 DOI: 10.3390/jpm12091361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 08/13/2022] [Accepted: 08/19/2022] [Indexed: 11/16/2022] Open
Abstract
Background: There is no standardized dataset for establishing artificial intelligence models in gastrointestinal endoscopy, and the optimal volume and class distribution of training datasets have not been evaluated. The authors previously created an artificial intelligence model that classifies endoscopic images of colorectal polyps into four categories: advanced colorectal cancer, early cancer/high-grade dysplasia, tubular adenoma, and non-neoplasm. The aim of this study was to evaluate the impact of training-dataset volume and class distribution on the development of deep-learning models that predict colorectal polyp histopathology from endoscopic images. Methods: The same 3828 endoscopic images used to create the earlier models were used, together with an additional 6838 images, to find the optimal volume and class distribution for a deep-learning model. Deep-learning models were established with various data volumes and class distributions, all trained uniformly on the no-code platform Neuro-T. The primary outcome was accuracy of the four-class prediction. Results: In the original, doubled, and tripled datasets alike, the highest internal-test classification accuracy was achieved by doubling the proportion of data in the fewer categories (2:2:1:1 for advanced colorectal cancer:early cancer/high-grade dysplasia:tubular adenoma:non-neoplasm). Doubling the proportion of data in the fewer categories of the original dataset showed the highest accuracy (86.4%, 95% confidence interval: 85.0–97.8%) compared with the doubled or tripled datasets; only 2418 images were required to reach this performance. Gradient-weighted class activation mapping confirmed that the regions the deep-learning model attends to coincide with those the endoscopist attends to.
Conclusion: Given the data-volume-dependent performance plateau of the colonoscopy classification model, a doubled or tripled dataset is not always beneficial to training. Deep-learning models would be more accurate if the proportion of lesions in the fewer categories were increased.
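The 2:2:1:1 rebalancing that performed best in this study can be sketched as simple oversampling of the under-represented classes before training. This is an illustrative sketch only, not the authors' actual pipeline; the class names, image counts, and the `rebalance` helper are hypothetical:

```python
import random
from collections import Counter

def rebalance(samples, weights, seed=0):
    """Oversample classes so their counts follow the given relative weights.

    samples: list of (item, label) pairs; weights: {label: relative weight}.
    The class with the largest count-to-weight ratio keeps its original size;
    the others are duplicated (sampled with replacement) up to the target size.
    """
    rng = random.Random(seed)
    by_class = {}
    for item, label in samples:
        by_class.setdefault(label, []).append((item, label))
    # Number of samples that one "weight point" should represent.
    unit = max(len(v) / weights[k] for k, v in by_class.items())
    out = []
    for label, items in by_class.items():
        target = round(unit * weights[label])
        out.extend(items)
        out.extend(rng.choices(items, k=max(0, target - len(items))))
    return out

# Hypothetical raw counts: 100 advanced cancers, 100 early cancers/HGD,
# 400 tubular adenomas, 400 non-neoplasms -> rebalance toward 2:2:1:1.
data = ([("img", "advanced")] * 100 + [("img", "early")] * 100 +
        [("img", "adenoma")] * 400 + [("img", "nonneo")] * 400)
weights = {"advanced": 2, "early": 2, "adenoma": 1, "nonneo": 1}
print(Counter(label for _, label in rebalance(data, weights)))
```

With these hypothetical counts, the two minority classes are duplicated up to 800 images each, yielding the 2:2:1:1 distribution over 2400 samples.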
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Correspondence: ; Tel.: +82-33-240-5821; Fax: +82-33-241-8064
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Young Joo Yang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
11
|
Gong EJ, Bang CS, Jung K, Kim SJ, Kim JW, Seo SI, Lee U, Maeng YB, Lee YJ, Lee JI, Baik GH, Lee JJ. Deep-Learning for the Diagnosis of Esophageal Cancers and Precursor Lesions in Endoscopic Images: A Model Establishment and Nationwide Multicenter Performance Verification Study. J Pers Med 2022; 12:jpm12071052. [PMID: 35887549 PMCID: PMC9320232 DOI: 10.3390/jpm12071052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 06/21/2022] [Accepted: 06/22/2022] [Indexed: 12/24/2022] Open
Abstract
Background: Endoscopic suspicion of lesions and prediction of the histology of esophageal cancers or premalignant lesions are not yet accurate. The local feature selection and optimization functions of deep-learning models enable accurate image analysis. Objectives: To establish a deep-learning model that diagnoses esophageal cancers, precursor lesions, and non-neoplasms from endoscopic images. Additionally, a nationwide prospective multicenter performance verification was conducted to confirm the feasibility of real-world clinical application. Methods: A total of 5162 white-light endoscopic images were used for the training and internal test of a model classifying esophageal cancers, dysplasias, and non-neoplasms. A no-code deep-learning tool was used to establish the model. Prospective multicenter external tests were conducted using 836 novel images from five hospitals. The primary performance metric was external-test accuracy. An attention map was generated and analyzed to provide explainability. Results: The established model reached 95.6% (95% confidence interval: 94.2–97.0%) internal-test accuracy (precision: 78.0%, recall: 93.9%, F1 score: 85.2%). In the external tests, accuracy ranged from 90.0% to 95.8% (overall accuracy: 93.9%). Attention map analysis showed no statistical difference between the expert endoscopist and the established model in the number of correctly identified regions of interest in the external tests (P = 0.11). In the dysplasia subgroup, the deep-learning model correctly identified more regions of interest than the endoscopist group, although the difference was not statistically significant (P = 0.48). Conclusions: We established a deep-learning model that accurately classifies esophageal cancers, precursor lesions, and non-neoplasms.
Multicenter external tests confirmed the model's potential for generalizability, and attention map analysis confirmed its explainability.
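As a quick consistency check on the figures above, the F1 score is the harmonic mean of precision and recall, and the reported 85.2% indeed follows from the stated precision (78.0%) and recall (93.9%):

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Internal-test values reported in the abstract above.
print(round(100 * f1_score(0.780, 0.939), 1))  # -> 85.2
```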
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung 25440, Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24252, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Hallym University College of Medicine, Chuncheon 24253, Korea
- Correspondence: ; Tel.: +82-33-240-5821; Fax: +82-33-241-8064
- Kyoungwon Jung
- Department of Internal Medicine, Kosin University College of Medicine, Busan 49267, Korea
- Su Jin Kim
- Department of Internal Medicine, Pusan National University School of Medicine and Biomedical Research Institute, Pusan National University Yangsan Hospital, Yangsan 50615, Korea
- Jong Wook Kim
- Department of Internal Medicine, Inje University Ilsan Paik Hospital, Goyang 10380, Korea
- Seung In Seo
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24252, Korea
- Uhmyung Lee
- Department of Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- You Bin Maeng
- Department of Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Ye Ji Lee
- Department of Biomedical Science, Hallym University, Chuncheon 24252, Korea
- Jae Ick Lee
- Department of Life Science, Hallym University, Chuncheon 24252, Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24252, Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Hallym University College of Medicine, Chuncheon 24253, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
12
|
No-Code Platform-Based Deep-Learning Models for Prediction of Colorectal Polyp Histology from White-Light Endoscopy Images: Development and Performance Verification. J Pers Med 2022; 12:jpm12060963. [PMID: 35743748 PMCID: PMC9225479 DOI: 10.3390/jpm12060963] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 05/27/2022] [Accepted: 06/10/2022] [Indexed: 12/17/2022] Open
Abstract
Background: The authors previously developed deep-learning models for the prediction of colorectal polyp histology (advanced colorectal cancer, early cancer/high-grade dysplasia, tubular adenoma with or without low-grade dysplasia, or non-neoplasm) from endoscopic images. While the model achieved 67.3% internal-test accuracy and 79.2% external-test accuracy, model development was labour-intensive and required specialised programming expertise. Moreover, the 240-image external-test dataset included only three advanced and eight early cancers, so it was difficult to generalise model performance. These limitations may be mitigated by deep-learning models developed using no-code platforms. Objective: To establish no-code platform-based deep-learning models for the prediction of colorectal polyp histology from white-light endoscopy images and compare their diagnostic performance with traditional models. Methods: The same 3828 endoscopic images used to establish previous models were used to establish new models based on no-code platforms Neuro-T, VLAD, and Create ML-Image Classifier. A prospective multicentre validation study was then conducted using 3818 novel images. The primary outcome was the accuracy of four-category prediction. Results: The model established using Neuro-T achieved the highest internal-test accuracy (75.3%, 95% confidence interval: 71.0–79.6%) and external-test accuracy (80.2%, 76.9–83.5%) but required the longest training time. In contrast, the model established using Create ML-Image Classifier required only 3 min for training and still achieved 72.7% (70.8–74.6%) external-test accuracy. Attention map analysis revealed that the imaging features used by the no-code deep-learning models were similar to those used by endoscopists during visual inspection. Conclusion: No-code deep-learning tools allow for the rapid development of models with high accuracy for predicting colorectal polyp histology.
13
|
Kim HJ, Gong EJ, Bang CS, Lee JJ, Suk KT, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Protruded Lesions Using Wireless Capsule Endoscopy: A Systematic Review and Diagnostic Test Accuracy Meta-Analysis. J Pers Med 2022; 12:jpm12040644. [PMID: 35455760 PMCID: PMC9029411 DOI: 10.3390/jpm12040644] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 04/14/2022] [Accepted: 04/14/2022] [Indexed: 12/13/2022] Open
Abstract
Background: Wireless capsule endoscopy allows the identification of small intestinal protruded lesions, such as polyps, tumors, or venous structures. However, reading wireless capsule endoscopy images or movies is time-consuming, and minute lesions are easy to miss. Computer-aided diagnosis (CAD) has been applied to improve the efficacy of the reading process for wireless capsule endoscopy images or movies; however, no studies have systematically determined the performance of CAD models in diagnosing gastrointestinal protruded lesions. Objective: The aim of this study was to evaluate the diagnostic performance of CAD models for gastrointestinal protruded lesions using wireless capsule endoscopic images. Methods: Core databases were searched for studies based on CAD models for the diagnosis of gastrointestinal protruded lesions using wireless capsule endoscopy that presented data on diagnostic performance. A systematic review and diagnostic test accuracy meta-analysis were performed. Results: Twelve studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of protruded lesions were 0.95 (95% confidence interval, 0.93–0.97), 0.89 (0.84–0.92), 0.91 (0.86–0.94), and 74 (43–126), respectively. Subgroup analyses showed robust results, meta-regression found no source of heterogeneity, and publication bias was not detected. Conclusion: CAD models showed high performance for the optical diagnosis of gastrointestinal protruded lesions in wireless capsule endoscopy.
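For a single study in such a meta-analysis, sensitivity, specificity, and the diagnostic odds ratio are derived from the 2x2 confusion table; the pooled estimates above come from a bivariate model across studies, not from simply summing tables. A minimal sketch with hypothetical counts:

```python
def dx_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and diagnostic odds ratio from one 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)  # diagnostic odds ratio
    return sensitivity, specificity, dor

# Hypothetical single study: 89 true positives, 9 false positives,
# 11 false negatives, 91 true negatives.
sens, spec, dor = dx_metrics(tp=89, fp=9, fn=11, tn=91)
print(round(sens, 2), round(spec, 2), round(dor, 1))  # -> 0.89 0.91 81.8
```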
Affiliation(s)
- Hye Jin Kim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Correspondence: ; Tel.: +82-33-240-5821; Fax: +82-33-241-8064
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Ki Tae Suk
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
14
|
Xie F, Zhang K, Li F, Ma G, Ni Y, Zhang W, Wang J, Li Y. Diagnostic accuracy of convolutional neural network-based endoscopic image analysis in diagnosing gastric cancer and predicting its invasion depth: a systematic review and meta-analysis. Gastrointest Endosc 2022; 95:599-609.e7. [PMID: 34979114 DOI: 10.1016/j.gie.2021.12.021] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Accepted: 12/25/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS This study aimed to evaluate the accuracy and effectiveness of the convolutional neural network (CNN) in diagnosing gastric cancer and predicting the invasion depth of gastric cancer and to compare the performance of the CNN with that of endoscopists. METHODS PubMed, Embase, Web of Science, and gray literature were searched until July 23, 2021 for studies that assessed the diagnostic accuracy of CNN-assisted examinations for gastric cancer or the invasion depth of gastric cancer. Studies meeting inclusion criteria were included in the systematic review and meta-analysis. RESULTS Seventeen studies comprising 51,446 images and 174 videos of 5539 patients were included. The pooled sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), and area under the curve (AUC) of the CNN for diagnosing gastric cancer were 89% (95% confidence interval [CI], 85-93), 93% (95% CI, 88-97), 13.4 (95% CI, 7.3-25.5), .11 (95% CI, .07-.17), and .94 (95% CI, .91-.98), respectively. The performance of the CNN in diagnosing gastric cancer was not significantly different from that of expert endoscopists (.95 vs .90, P > .05) and was better than that of overall endoscopists (experts and nonexperts) (.95 vs .87, P < .05). The pooled sensitivity, specificity, LR+, LR-, and AUC of the CNN for predicting the invasion depth of gastric cancer were 82% (95% CI, 78-85), 90% (95% CI, 82-95), 8.4 (95% CI, 4.2-16.8), .20 (95% CI, .16-.26), and .90 (95% CI, .87-.93), respectively. CONCLUSIONS The CNN is highly accurate in diagnosing gastric cancer and predicting the invasion depth of gastric cancer. The performance of the CNN in diagnosing gastric cancer is not significantly different from that of expert endoscopists. Studies of the real-time performance of the CNN for gastric cancer diagnosis are needed to confirm these findings.
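At a single operating point, the likelihood ratios follow directly from sensitivity and specificity (LR+ = sens/(1 - spec), LR- = (1 - sens)/spec). Applying those formulas to the pooled sensitivity and specificity above gives values close to, but not identical to, the reported pooled LRs, because pooled LRs in a bivariate meta-analysis are estimated jointly rather than recomputed from the pooled summary points. A sketch:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios at one diagnostic threshold."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Pooled sensitivity 89% and specificity 93% from the abstract above.
lr_pos, lr_neg = likelihood_ratios(0.89, 0.93)
print(round(lr_pos, 1), round(lr_neg, 2))  # -> 12.7 0.12 (reported pooled: 13.4, .11)
```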
Affiliation(s)
- Fang Xie
- School of Nursing, Jilin University, Changchun, Jilin, China
- Keqiang Zhang
- Second Hospital of Jilin University, Changchun, Jilin, China
- Feng Li
- School of Nursing, Jilin University, Changchun, Jilin, China
- Guorong Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
- Yuanyuan Ni
- School of Nursing, Jilin University, Changchun, Jilin, China
- Wei Zhang
- School of Nursing, Jilin University, Changchun, Jilin, China
- Junchao Wang
- School of Nursing, Jilin University, Changchun, Jilin, China
- Yuewei Li
- School of Nursing, Jilin University, Changchun, Jilin, China
15
|
Zhuang H, Bao A, Tan Y, Wang H, Xie Q, Qiu M, Xiong W, Liao F. Application and prospect of artificial intelligence in digestive endoscopy. Expert Rev Gastroenterol Hepatol 2022; 16:21-31. [PMID: 34937459 DOI: 10.1080/17474124.2022.2020646] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
INTRODUCTION With the progress of science and technology, artificial intelligence, represented by deep learning, has gradually begun to be applied in the medical field. Artificial intelligence has been applied to benign gastrointestinal lesions, tumors, early cancer, inflammatory bowel disease, and diseases of the gallbladder, pancreas, and other organs. This review summarizes the latest research results on artificial intelligence in digestive endoscopy and discusses its prospects in digestive system diseases. AREAS COVERED We retrieved relevant documents on artificial intelligence in digestive tract diseases from PubMed and Medline. This review elaborates on the knowledge of computer-aided diagnosis in digestive endoscopy. EXPERT OPINION Artificial intelligence significantly improves diagnostic accuracy, reduces physicians' workload, and provides supporting evidence for clinical diagnosis and treatment. In the near future, artificial intelligence will have high application value in the field of medicine.
Affiliation(s)
- Huangming Zhuang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Anyu Bao
- Clinical Laboratory, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Yulin Tan
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Hanyu Wang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Qingfang Xie
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Meiqi Qiu
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Wanli Xiong
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Fei Liao
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
16
|
Bang CS, Lee JJ, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J Med Internet Res 2021; 23:e33267. [PMID: 34904949 PMCID: PMC8715364 DOI: 10.2196/33267] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 10/10/2021] [Accepted: 10/13/2021] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Interpretation of capsule endoscopy images or movies is operator-dependent and time-consuming. As a result, computer-aided diagnosis (CAD) has been applied to enhance the efficacy and accuracy of the review process. Two previous meta-analyses reported the diagnostic performance of CAD models for gastrointestinal ulcers or hemorrhage in capsule endoscopy, but no sufficiently systematic review has been conducted to determine the real diagnostic validity of CAD models. OBJECTIVE To evaluate the diagnostic test accuracy of CAD models for gastrointestinal ulcers or hemorrhage using wireless capsule endoscopic images. METHODS We searched core databases for studies based on CAD models for the diagnosis of ulcers or hemorrhage using capsule endoscopy that presented data on diagnostic performance. A systematic review and diagnostic test accuracy meta-analysis were performed. RESULTS Overall, 39 studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of ulcers (or erosions) were .97 (95% confidence interval, .95-.98), .93 (.89-.95), .92 (.89-.94), and 138 (79-243), respectively. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of hemorrhage (or angioectasia) were .99 (.98-.99), .96 (.94-.97), .97 (.95-.99), and 888 (343-2303), respectively. Subgroup analyses showed robust results. Meta-regression found that publication year, number of training images, and target disease (ulcers vs erosions, hemorrhage vs angioectasia) were sources of heterogeneity. No publication bias was detected. CONCLUSIONS CAD models showed high performance for the optical diagnosis of gastrointestinal ulcers and hemorrhage in wireless capsule endoscopy.
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
17
|
Bang CS. Artificial Intelligence in the Analysis of Upper Gastrointestinal Disorders. The Korean Journal of Helicobacter and Upper Gastrointestinal Research 2021. [DOI: 10.7704/kjhugr.2021.0030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
In the past, conventional machine learning was applied to analyze tabulated medical data, while deep learning was applied to conditions such as gastrointestinal disorders. Neural networks are used to detect, classify, and delineate lesions in images because the local feature selection and optimization of deep-learning models enable accurate image analysis. With the accumulation of medical records, the evolution of computational power and graphics processing units, and the widespread use of open-source libraries in large-scale machine learning, medical artificial intelligence (AI) is overcoming its limitations. While early studies prioritized the automatic diagnosis of cancer or pre-cancerous lesions, the scope of AI has expanded to include benign lesions, quality control, and machine-learning analysis of big data. However, the limited commercialization of medical AI and the need to justify its application in each field of research remain restricting factors. Modeling assumes that observations follow certain statistical rules, and external validation checks whether this assumption is correct and generalizable. Therefore, data left unused during training or internal testing are essential to validate the performance of established AI models. This article summarizes studies on the application of AI models in upper gastrointestinal disorders and discusses current limitations and perspectives on future development.
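The point about unused data can be illustrated with a simple hold-out split: images used for training must never appear in the validation or internal-test sets. This sketch uses an illustrative 8:1:1 split; the ratio and the `split` helper are hypothetical, not taken from the article:

```python
import random

def split(items, train_frac=0.8, valid_frac=0.1, seed=42):
    """Shuffle and partition items into disjoint train/validation/test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_valid = int(len(items) * valid_frac)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

train_set, valid_set, test_set = split(range(1000))
# Disjointness is what makes the internal test a valid performance check.
assert not set(train_set) & set(test_set)
print(len(train_set), len(valid_set), len(test_set))  # -> 800 100 100
```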
18
|
El-Nakeep S, El-Nakeep M. Artificial intelligence for cancer detection in upper gastrointestinal endoscopy, current status, and future aspirations. Artif Intell Gastroenterol 2021; 2:124-132. [DOI: 10.35712/aig.v2.i5.124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 06/26/2021] [Accepted: 09/02/2021] [Indexed: 02/06/2023] Open
Abstract
This minireview discusses the benefits and pitfalls of machine learning and artificial intelligence in upper gastrointestinal endoscopy for the detection and characterization of neoplasms. We reviewed the literature for relevant publications on the topic using the PubMed, IEEE, Science Direct, and Google Scholar databases. We discuss the phases of machine learning, the importance of advanced imaging techniques in upper gastrointestinal endoscopy, and their association with artificial intelligence.
Affiliation(s)
- Sarah El-Nakeep
- Gastroenterology and Hepatology Unit, Internal Medicine Department, Faculty of Medicine, Ain Shams University, Cairo 11591, Egypt
- Mohamed El-Nakeep
- Master of Science in Electrical Engineering "Electronics and Communications", Electronics and Electrical Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11736, Egypt
- Bachelor of Science in Electronics and Electrical Communications, Electronics and Communications and Computers Department, Faculty of Engineering, Helwan University, Cairo 11736, Egypt