1. Li N, Yang J, Li X, Shi Y, Wang K. Accuracy of artificial intelligence-assisted endoscopy in the diagnosis of gastric intestinal metaplasia: A systematic review and meta-analysis. PLoS One 2024; 19:e0303421. [PMID: 38743709 PMCID: PMC11093381 DOI: 10.1371/journal.pone.0303421]
Abstract
BACKGROUND AND AIMS Gastric intestinal metaplasia is a precancerous condition, and timely diagnosis is essential to delay or halt progression to cancer. Artificial intelligence (AI) has found widespread application in disease diagnosis. This study aimed to comprehensively evaluate the diagnostic accuracy of AI in detecting gastric intestinal metaplasia on endoscopy, compare it with that of endoscopists, and explore the main factors affecting AI performance. METHODS The study followed the PRISMA-DTA guidelines; the PubMed, Embase, Web of Science, Cochrane, and IEEE Xplore databases were searched for relevant studies published up to October 2023. We extracted the key features and experimental data of each study and pooled the sensitivity and specificity metrics by meta-analysis. We then compared the diagnostic ability of AI versus endoscopists on the same test data. RESULTS Twelve studies with 11,173 patients were included, demonstrating the efficacy of AI models in diagnosing gastric intestinal metaplasia. The meta-analysis yielded a pooled sensitivity of 94% (95% confidence interval: 92%-96%) and specificity of 93% (95% confidence interval: 89%-95%). The combined area under the receiver operating characteristic curve was 0.97. Meta-regression and subgroup analysis showed that factors such as study design, endoscopy type, number of training images, and algorithm had a significant effect on the diagnostic performance of AI. AI exhibited a higher diagnostic capacity than endoscopists (sensitivity: 95% vs. 79%). CONCLUSIONS AI-aided diagnosis of gastric intestinal metaplasia on endoscopy showed high performance and clinical diagnostic value. However, further prospective studies are required to validate these findings.
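The pooled sensitivity and specificity above come from a diagnostic test accuracy meta-analysis. As a minimal sketch of the general idea (not the authors' bivariate model), the snippet below pools per-study sensitivities on the logit scale with inverse-variance weights; the per-study counts are invented for illustration.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Fixed-effect, inverse-variance pooling of per-study sensitivity
    on the logit scale. studies: list of (true_positives, false_negatives)."""
    num = den = 0.0
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        sens = tp / (tp + fn)
        var = 1.0 / tp + 1.0 / fn            # approx. variance of logit(sens)
        weight = 1.0 / var
        num += weight * logit(sens)
        den += weight
    return inv_logit(num / den)

# hypothetical per-study 2x2 counts
print(round(pooled_sensitivity([(90, 10), (85, 15), (190, 10)]), 3))
```

The same machinery applied to (true negative, false positive) pairs gives the pooled specificity.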
Affiliation(s)
- Na Li
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Jian Yang
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Xiaodong Li
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Yanting Shi
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Kunhong Wang
- Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
2. Zhang K, Wang H, Cheng Y, Liu H, Gong Q, Zeng Q, Zhang T, Wei G, Wei Z, Chen D. Early gastric cancer detection and lesion segmentation based on deep learning and gastroscopic images. Sci Rep 2024; 14:7847. [PMID: 38570595 PMCID: PMC10991264 DOI: 10.1038/s41598-024-58361-8]
Abstract
Gastric cancer is a highly prevalent disease that poses a serious threat to public health. In clinical practice, gastroscopy is frequently used by medical practitioners to screen for gastric cancer. However, the symptoms of gastric cancer at different stages of advancement vary significantly, particularly in the case of early gastric cancer (EGC). The manifestations of EGC are often indistinct, leading to a detection rate of less than 10%. In recent years, researchers have focused on leveraging deep learning algorithms to assist medical professionals in detecting EGC and thereby improve detection rates. To enhance the ability of deep learning to detect EGC and segment lesions in gastroscopic images, an Improved Mask R-CNN (IMR-CNN) model was proposed. This model incorporates a "Bi-directional feature extraction and fusion module" and a "Purification module for feature channel and space" based on the Mask R-CNN (MR-CNN). Our study includes a dataset of 1120 images of EGC for training and validation of the models. The experimental results indicate that the IMR-CNN model outperforms the original MR-CNN model, with Precision, Recall, Accuracy, Specificity and F1-Score values of 92.9%, 95.3%, 93.9%, 92.5% and 94.1%, respectively. Therefore, our proposed IMR-CNN model has superior detection and lesion segmentation capabilities and can effectively aid doctors in diagnosing EGC from gastroscopic images.
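The Precision, Recall, Accuracy, Specificity, and F1-Score figures quoted above are all derived from the same confusion-matrix counts. A minimal sketch of those standard definitions, using toy counts rather than the paper's data:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to report detection performance."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # a.k.a. sensitivity
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, specificity, f1

# toy counts, not the paper's data
print(detection_metrics(8, 2, 9, 1))
```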
Affiliation(s)
- Kezhi Zhang
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
- Haibao Wang
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
- Yaru Cheng
- Department of Gastroenterology, Shandong Second Provincial General Hospital, 4 Duan Xing West Road, Jinan, 250022, Shandong, China
- Hongyan Liu
- Department of Gastroenterology, Shandong Second Provincial General Hospital, 4 Duan Xing West Road, Jinan, 250022, Shandong, China
- Qi Gong
- Department of Gastroenterology, Shandong Second Provincial General Hospital, 4 Duan Xing West Road, Jinan, 250022, Shandong, China
- Qian Zeng
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
- Tao Zhang
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
- Guoqiang Wei
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
- School of Electronic Engineering, Hunan College of Information, Changsha, 410200, Hunan, China
- Zhi Wei
- Department of Gastroenterology, Shandong Second Provincial General Hospital, 4 Duan Xing West Road, Jinan, 250022, Shandong, China
- Dong Chen
- Guangxi Key Laboratory of Information Functional Materials and Intelligent Information Processing, School of Physics and Electronics, Nanning Normal University, 175 Mingxiu East Road, Nanning, 530001, Guangxi, China
3. Ueyama H, Hirasawa T, Yano T, Doyama H, Isomoto H, Yagi K, Kawai T, Yao K. Advanced diagnostic endoscopy in the upper gastrointestinal tract: Review of the Japan Gastroenterological Endoscopic Society core sessions. DEN Open 2024; 4:e359. [PMID: 38601269 PMCID: PMC11004903 DOI: 10.1002/deo2.359]
Abstract
The Japan Gastroenterological Endoscopy Society (JGES) held four serial symposia between 2021 and 2022 on state-of-the-art issues related to advanced diagnostic endoscopy of the upper gastrointestinal tract. This review summarizes the four core sessions and presents them as a conference report. Eleven studies were discussed in the 101st JGES Core Session, which addressed the challenges and prospects of upper gastroenterological endoscopy. Ten studies were also explored in the 102nd JGES Core Session on advanced upper gastrointestinal endoscopic diagnosis for decision-making regarding therapeutic strategies. Moreover, eight studies were presented during the 103rd JGES Core Session on the development and evaluation of endoscopic artificial intelligence in the field of upper gastrointestinal endoscopy. Twelve studies were also discussed in the 104th JGES Core Session, which focused on the evidence and new developments related to the upper gastrointestinal tract. The endoscopic diagnosis of upper gastrointestinal diseases using image-enhanced endoscopy and AI is one of the most recent topics and has received considerable attention. These four core sessions enabled us to grasp the current state-of-the-art in upper gastrointestinal endoscopic diagnostics and identify future challenges. Based on these studies, we hope that an endoscopic diagnostic system useful in clinical practice is established for each field of upper gastrointestinal endoscopy.
Affiliation(s)
- Hiroya Ueyama
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo, Japan
- Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Tomonori Yano
- Department of Gastroenterology, Endoscopy Division, National Cancer Center Hospital East, Chiba, Japan
- Hisashi Doyama
- Department of Gastroenterology, Ishikawa Prefectural Central Hospital, Ishikawa, Japan
- Hajime Isomoto
- Division of Gastroenterology and Nephrology, Tottori University Faculty of Medicine, Tottori, Japan
- Kazuyoshi Yagi
- Department of Gastroenterology, Niigata University Local Medical Care Education Center, Uonuma Kikan Hospital, Niigata, Japan
- Takashi Kawai
- Department of Gastroenterological Endoscopy, Tokyo Medical University Hospital, Tokyo, Japan
- Kenshi Yao
- Department of Endoscopy, Fukuoka University Chikushi Hospital, Fukuoka, Japan
4. Takeda T, Asaoka D, Ueyama H, Abe D, Suzuki M, Inami Y, Uemura Y, Yamamoto M, Iwano T, Uchida R, Utsunomiya H, Oki S, Suzuki N, Ikeda A, Akazawa Y, Matsumoto K, Ueda K, Hojo M, Nojiri S, Tada T, Nagahara A. Development of an Artificial Intelligence Diagnostic System Using Linked Color Imaging for Barrett's Esophagus. J Clin Med 2024; 13:1990. [PMID: 38610762 PMCID: PMC11012507 DOI: 10.3390/jcm13071990]
Abstract
Background: Cases of Barrett's esophagus and esophageal adenocarcinoma are increasing as gastroesophageal reflux disease becomes more common. Using artificial intelligence (AI) and linked color imaging (LCI), we aimed to establish a diagnostic method for short-segment Barrett's esophagus (SSBE). Methods: We retrospectively selected 624 consecutive patients treated at our hospital between May 2017 and March 2020 who underwent esophagogastroduodenoscopy with white light imaging (WLI) and LCI. Images were randomly chosen as training data from WLI: 542 (SSBE+/- 348/194) of 696 (SSBE+/- 444/252); and LCI: 643 (SSBE+/- 446/197) of 805 (SSBE+/- 543/262). Using a Vision Transformer (ViT-B/16-384) to diagnose SSBE, we established two AI systems, one for WLI and one for LCI. Finally, 126 WLI (SSBE+/- 77/49) and 137 LCI (SSBE+/- 81/56) images were used for verification. The diagnostic accuracy of six endoscopists was compared with that of the AI. Results: Study participants were 68.2 ± 12.3 years old, M/F 330/294, SSBE+/- 409/215. The accuracy/sensitivity/specificity (%) of the AI were 84.1/89.6/75.5 for WLI and 90.5/90.1/91.1 for LCI; those of experts and trainees were 88.6/88.7/88.4 and 85.7/87.0/83.7 for WLI, and 93.4/92.6/94.6 and 84.7/88.1/79.8 for LCI, respectively. Conclusions: AI diagnosis of SSBE was similar in accuracy to that of specialists. Our findings may aid the diagnosis of SSBE in the clinic.
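The ViT-B/16-384 backbone mentioned above splits each 384×384 endoscopic image into non-overlapping 16×16 patches before classification. A quick sketch of the resulting token count (standard ViT arithmetic, independent of this study's data):

```python
def vit_token_count(image_size, patch_size, cls_token=True):
    """Number of tokens a Vision Transformer sees: one per non-overlapping
    patch, plus an optional [CLS] classification token."""
    assert image_size % patch_size == 0, "image must divide evenly into patches"
    patches = (image_size // patch_size) ** 2
    return patches + (1 if cls_token else 0)

print(vit_token_count(384, 16))   # ViT-B/16 at 384x384 input
```

At 384×384 input the model attends over 24×24 = 576 patch tokens plus the class token, which is why the 384-pixel variant is costlier than the common 224-pixel one.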
Affiliation(s)
- Tsutomu Takeda
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Daisuke Asaoka
- Department of Gastroenterology, Juntendo Tokyo Koto Geriatric Medical Center, Tokyo 136-0075, Japan
- Hiroya Ueyama
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Daiki Abe
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Maiko Suzuki
- Department of Gastroenterology, Juntendo Tokyo Koto Geriatric Medical Center, Tokyo 136-0075, Japan
- Yoshihiro Inami
- Department of Gastroenterology, Juntendo Tokyo Koto Geriatric Medical Center, Tokyo 136-0075, Japan
- Yasuko Uemura
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Momoko Yamamoto
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Tomoyo Iwano
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Ryota Uchida
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Hisanori Utsunomiya
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Shotaro Oki
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Nobuyuki Suzuki
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Atsushi Ikeda
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Yoichi Akazawa
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Kohei Matsumoto
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Kumiko Ueda
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Mariko Hojo
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Shuko Nojiri
- Department of Medical Technology Innovation Center, Juntendo University School of Medicine, Tokyo 113-8421, Japan
- Akihito Nagahara
- Department of Gastroenterology, Juntendo University School of Medicine, Tokyo 113-8421, Japan
5. Wang J, Shao M, Hu H, Xiao W, Cheng G, Yang G, Ji H, Yu S, Wan J, Xie Z, Xu M. Convolutional neural network applied to preoperative venous-phase CT images predicts risk category in patients with gastric gastrointestinal stromal tumors. BMC Cancer 2024; 24:280. [PMID: 38429653 PMCID: PMC10908217 DOI: 10.1186/s12885-024-11962-y]
Abstract
OBJECTIVE The risk category of gastric gastrointestinal stromal tumors (GISTs) is closely related to the surgical method, the scope of resection, and the need for preoperative chemotherapy. We aimed to develop and validate convolutional neural network (CNN) models based on preoperative venous-phase CT images to predict the risk category of gastric GISTs. METHOD A total of 425 patients pathologically diagnosed with gastric GISTs at the authors' medical centers between January 2012 and July 2021 were split into a training set (154, 84, and 59 with very low/low, intermediate, and high risk, respectively) and a validation set (67, 35, and 26, respectively). Three CNN models (CNN_layer3, CNN_layer9, and CNN_layer15) were constructed from the venous-phase CT slice with the maximum tumour mask together with the 1, 4, and 7 slices above and below it, respectively. The area under the receiver operating characteristic curve (AUROC) and the Obuchowski index were calculated to compare the diagnostic performance of the CNN models. RESULTS In the validation set, CNN_layer3, CNN_layer9, and CNN_layer15 had AUROCs of 0.89, 0.90, and 0.90, respectively, for low-risk gastric GISTs; 0.82, 0.83, and 0.83 for intermediate-risk gastric GISTs; and 0.86, 0.86, and 0.85 for high-risk gastric GISTs. In the validation dataset, CNN_layer3 (Obuchowski index, 0.871) performed similarly to CNN_layer9 and CNN_layer15 (Obuchowski index, 0.875 and 0.873, respectively) in predicting the gastric GIST risk category (all P > .05). CONCLUSIONS CNNs based on preoperative venous-phase CT images showed good performance for predicting the risk category of gastric GISTs.
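The three models differ only in how many CT slices around the maximum-tumour-mask slice are fed to the CNN (1, 4, or 7 above and below, giving 3-, 9-, and 15-slice inputs). A minimal sketch of that slice-selection step, with hypothetical per-slice mask areas:

```python
def select_slices(mask_areas, k):
    """Indices of the slice with the largest tumour mask plus k slices
    above and below it, clipped at the volume boundaries."""
    center = max(range(len(mask_areas)), key=lambda i: mask_areas[i])
    lo = max(0, center - k)
    hi = min(len(mask_areas), center + k + 1)
    return list(range(lo, hi))

areas = [1.2, 3.5, 9.8, 4.1, 2.0, 1.1, 0.4]   # hypothetical mask area per slice
print(select_slices(areas, 1))   # 3-slice input (CNN_layer3)
print(select_slices(areas, 4))   # up to 9 slices (CNN_layer9), clipped here
```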
Affiliation(s)
- Jian Wang
- Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou, Zhejiang, China
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, Zhejiang, China
- Meihua Shao
- Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou, Zhejiang, China
- Hongjie Hu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wenbo Xiao
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Guangzhao Yang
- Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou, Zhejiang, China
- Hongli Ji
- Jianpei Technology, Hangzhou, Zhejiang, China
- Susu Yu
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Jie Wan
- Jianpei Technology, Hangzhou, Zhejiang, China
- Zongyu Xie
- Department of Radiology, The First Affiliated Hospital of Bengbu Medical University, Bengbu, Anhui, China
- Maosheng Xu
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, Zhejiang, China
6. Ma H, Ma X, Yang C, Niu Q, Gao T, Liu C, Chen Y. Development and evaluation of a program based on a generative pre-trained transformer model from a public natural language processing platform for efficiency enhancement in post-procedural quality control of esophageal endoscopic submucosal dissection. Surg Endosc 2024; 38:1264-1272. [PMID: 38097750 DOI: 10.1007/s00464-023-10620-x]
Abstract
BACKGROUND Post-procedural quality control of endoscopic submucosal dissection (ESD) is emphasized in guidelines. However, this process can be tedious and time-consuming. Recently, a pre-trained model called generative pre-trained transformer (GPT), available on a public natural language processing platform, has emerged and garnered significant attention; its capabilities align well with the post-procedural quality control process and have the potential to streamline it. We therefore developed a simple program utilizing this platform and evaluated its performance. METHODS Esophageal ESDs were retrospectively included. A manual quality control process was performed and served as the reference standard. The GPT prompt was optimized through multiple iterations. A Python program was developed to automatically submit the prompt with the pathology report of each ESD procedure and collect the quality control information returned by GPT. Performance on quality control was evaluated with accuracy, precision, recall, and F1-score. RESULTS 165 cases were included in the dataset, of which 5 were used as the prompt-optimization dataset and 160 as the validation dataset. The definitive prompt was achieved through seven iterations. Time spent by GPT on the validation dataset was 13.47 ± 2.43 min. Accuracies of the quality control program for pathological diagnosis, invasion depth, horizontal margin, vertical margin, vascular invasion, and lymphatic invasion were (0.940, 0.952) (95% CI), (0.925, 0.945) (95% CI), 0.931, 1.0, and 1.0, respectively. Precisions were (0.965, 0.969) (95% CI), (0.934, 0.954) (95% CI), and 0.957 for pathological diagnosis, invasion depth, and horizontal margin, respectively. Recalls were (0.940, 0.952) (95% CI), (0.925, 0.945) (95% CI), and 0.931 for the same factors, respectively. F1-scores were (0.945, 0.957) (95% CI), (0.928, 0.948) (95% CI), and 0.941, respectively.
CONCLUSIONS The program was capable of performing post-procedural quality control of esophageal ESDs. GPT can be easily applied to this quality control process and can reduce the workload of endoscopists.
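A program like the one described submits each pathology report with a fixed prompt and then parses the model's structured reply. The sketch below shows only the parsing and validation half, with a hypothetical JSON schema; the field names are illustrative, not the authors' actual prompt design.

```python
import json

# illustrative schema; the authors' actual quality-control items may differ
REQUIRED_FIELDS = {
    "diagnosis", "invasion_depth", "horizontal_margin",
    "vertical_margin", "vascular_invasion", "lymphatic_invasion",
}

def parse_qc_reply(reply_text):
    """Validate and parse the JSON object the model is instructed to return,
    rejecting replies that omit any required quality-control field."""
    fields = json.loads(reply_text)
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {sorted(missing)}")
    return fields
```

Validating each reply against a fixed schema is what makes the automated results comparable with the manual reference standard field by field.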
Affiliation(s)
- Huaiyuan Ma
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Xingbin Ma
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Chunxiao Yang
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Qiong Niu
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Tao Gao
- Endoscopy Center of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Chengxia Liu
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Endoscopy Center of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Yan Chen
- Department of Gastroenterology and Hepatology, Binzhou Medical University Hospital, Binzhou, 256603, Shandong, China
- Digestive Disease Research Institute of Binzhou Medical University Hospital, Binzhou, Shandong, China
- Endoscopy Center of Binzhou Medical University Hospital, Binzhou, Shandong, China
7. Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. [PMID: 38161085 DOI: 10.1016/j.oooo.2023.10.003]
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns of systematic error in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis nested in a previous trial in which a CNN model performed automated classification of elementary lesions from clinical images of oral lesions. The resulting CNN classification errors formed the dataset for this study. A total of 116 real outputs diverged from the estimated outputs, representing 7.6% of all images analyzed by the CNN. RESULTS The discrepancies between the real and estimated outputs were associated with problems of image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS Qualitative analysis of errors in the automated classification of clinical images confirmed the impact of image quality and identified the strong impact of the data augmentation process. Knowledge of the factors that models evaluate to make decisions can increase confidence in the high classification potential of CNNs.
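Data augmentation, flagged above as a source of misclassification, expands a training set by transforming images geometrically and photometrically. A toy sketch of two such transforms on a 2D grayscale pixel grid (illustrative only, not the study's pipeline):

```python
def hflip(image):
    """Horizontal flip of a 2D grayscale pixel grid."""
    return [row[::-1] for row in image]

def adjust_brightness(image, factor):
    """Scale pixel values, clipping to the 8-bit range [0, 255]."""
    return [[min(255, int(px * factor)) for px in row] for row in image]

img = [[10, 20], [30, 40]]
print(hflip(img))
print(adjust_brightness(img, 1.5))
```

Transforms like these can also distort diagnostically relevant features (e.g. lesion orientation or color), which is one way augmentation can contribute to the systematic errors the study describes.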
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Juliana Romanini
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
- Janete Dias Almeida
- Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
- Jonas de Almeida Rodrigues
- Department of Surgery and Orthopaedics, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil; Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
8. Zeng X, Yang L, Dong Z, Gong D, Li Y, Deng Y, Du H, Li X, Xu Y, Luo C, Wang J, Tao X, Zhang C, Zhu Y, Jiang R, Yao L, Wu L, Jin P, Yu H. The effect of incorporating domain knowledge with deep learning in identifying benign and malignant gastric whitish lesions: A retrospective study. J Gastroenterol Hepatol 2024. [PMID: 38414305 DOI: 10.1111/jgh.16525]
Abstract
BACKGROUND AND AIM Early whitish gastric neoplasms can easily be misdiagnosed; differential diagnosis of gastric whitish lesions remains a challenge. We aimed to build a deep learning (DL) model to diagnose whitish gastric neoplasms and to explore the effect of adding domain knowledge to model construction. METHODS We collected 4558 images from two institutions to train and test models. We first developed two standalone DL models (1 and 2) using supervised and semi-supervised algorithms. We then selected diagnosis-related features through literature research and developed feature-extraction models for boundary, surface, roundness, depression, and location. The predictions of the five feature-extraction models and the standalone DL model were combined and input into seven machine learning (ML) based fitting-diagnosis models. The optimal model was selected as ENDOANGEL-WD (whitish diagnosis) and compared with endoscopists. RESULTS Standalone DL 2 had higher sensitivity than standalone DL 1 (83.12% vs 68.67%, Bonferroni-adjusted P = 0.024). With domain knowledge added, the decision tree performed best among the seven ML models, achieving higher specificity than DL 1 (84.38% vs 72.27%, Bonferroni-adjusted P < 0.05) and higher accuracy than DL 2 (80.47%, Bonferroni-adjusted P < 0.001), and was selected as ENDOANGEL-WD. ENDOANGEL-WD showed better accuracy than 10 endoscopists (75.70%, P < 0.001). CONCLUSIONS We developed a novel system, ENDOANGEL-WD, combining domain knowledge and traditional DL to detect gastric whitish neoplasms. Adding domain knowledge improved the performance of traditional DL, potentially offering a novel approach to building diagnostic models for other rare diseases.
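ENDOANGEL-WD fuses a DL probability with domain-knowledge features through an ML fitting-diagnosis model (a decision tree in the paper). The toy rule below only illustrates that fusion idea; the feature names, increments, and threshold are invented for the sketch, not ENDOANGEL-WD's learned tree.

```python
def fitting_diagnosis(dl_prob, boundary_clear, surface_irregular, depressed):
    """Toy fusion of a DL probability with binary domain features.
    All increments and the 0.5 cutoff are invented for illustration."""
    score = dl_prob
    if not boundary_clear:
        score += 0.15    # an unclear boundary raises suspicion
    if surface_irregular:
        score += 0.10
    if depressed:
        score += 0.10
    return "neoplastic" if score >= 0.5 else "non-neoplastic"

print(fitting_diagnosis(0.3, boundary_clear=False,
                        surface_irregular=True, depressed=True))
```

The point of the paper's design is that a learned version of such a rule (the decision tree) can override a borderline DL score when clinically meaningful features disagree with it.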
Collapse
Affiliation(s)
- Xiaoquan Zeng
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial lntelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Lang Yang
- Department of Gastroenterology, The Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Zehua Dong
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial lntelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Dexin Gong
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Yanxia Li
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Yunchao Deng
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Hongliu Du
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Xun Li
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Youming Xu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Chaijie Luo
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Junxiao Wang
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Xiao Tao
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Chenxia Zhang
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Yijie Zhu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Ruiqing Jiang
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Liwen Yao
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Lianlian Wu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| | - Peng Jin
- Department of Gastroenterology, The Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Honggang Yu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Wuhan, China
| |
|
9
|
Shi Y, Fan H, Li L, Hou Y, Qian F, Zhuang M, Miao B, Fei S. The value of machine learning approaches in the diagnosis of early gastric cancer: a systematic review and meta-analysis. World J Surg Oncol 2024; 22:40. [PMID: 38297303 PMCID: PMC10832162 DOI: 10.1186/s12957-024-03321-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 01/23/2024] [Indexed: 02/02/2024] Open
Abstract
BACKGROUND The application of machine learning (ML) for identifying early gastric cancer (EGC) has drawn increasing attention, but evidence-based support for its specific diagnostic performance is lacking. Hence, this systematic review and meta-analysis was conducted to assess the performance of image-based ML in EGC diagnosis. METHODS We performed a comprehensive electronic search in PubMed, Embase, Cochrane Library, and Web of Science up to September 25, 2022. QUADAS-2 was used to judge the risk of bias of the included articles. We performed the meta-analysis using a bivariate mixed-effects model, and sensitivity analyses and heterogeneity tests were carried out. RESULTS Twenty-one articles were included. The sensitivity (SEN), specificity (SPE), and SROC of ML-based models were 0.91 (95% CI: 0.87-0.94), 0.85 (95% CI: 0.81-0.89), and 0.94 (95% CI: 0.39-1.00) in the training set and 0.90 (95% CI: 0.86-0.93), 0.90 (95% CI: 0.86-0.92), and 0.96 (95% CI: 0.19-1.00) in the validation set. The SEN, SPE, and SROC of EGC diagnosis by non-specialist clinicians were 0.64 (95% CI: 0.56-0.71), 0.84 (95% CI: 0.77-0.89), and 0.80 (95% CI: 0.29-0.97), and those by specialist clinicians were 0.80 (95% CI: 0.74-0.85), 0.88 (95% CI: 0.85-0.91), and 0.91 (95% CI: 0.37-0.99). With the assistance of ML models, the SEN of non-specialist physicians in diagnosing EGC improved significantly (0.76 vs 0.64). CONCLUSION ML-based diagnostic models perform well in identifying EGC, and with their assistance the diagnostic accuracy of non-specialist clinicians can be raised to the level of specialists. These results suggest that ML models can help less experienced clinicians diagnose EGC under endoscopy and have broad clinical application value.
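For readers unfamiliar with how pooled sensitivity figures like those above are produced, here is a minimal sketch of inverse-variance pooling on the logit scale with invented study counts. The review itself used a bivariate mixed-effects model, which pools sensitivity and specificity jointly and accounts for between-study heterogeneity; this univariate fixed-effect version only illustrates the mechanics.

```python
# Hedged sketch: pool per-study sensitivities on the logit scale with
# inverse-variance weights, then back-transform. Counts are invented.
import math

def pool_logit(events, totals):
    """Inverse-variance pooled proportion on the logit scale."""
    num = den = 0.0
    for e, n in zip(events, totals):
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)   # approximate variance of the logit
        w = 1 / var
        num += w * logit
        den += w
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical true-positive counts / diseased totals from three studies.
sens = pool_logit([90, 45, 180], [100, 50, 200])
print(round(sens, 3))  # 0.9
```

Real diagnostic meta-analysis software additionally models the correlation between sensitivity and specificity across studies, which this sketch omits.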
Affiliation(s)
- Yiheng Shi
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China
- First Clinical Medical College, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China
| | - Haohan Fan
- First Clinical Medical College, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China
| | - Li Li
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China
- Key Laboratory of Gastrointestinal Endoscopy, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China
| | - Yaqi Hou
- College of Nursing, Yangzhou University, Yangzhou, 225009, China
| | - Feifei Qian
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China
- First Clinical Medical College, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China
| | - Mengting Zhuang
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China
- First Clinical Medical College, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China
| | - Bei Miao
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China.
- Institute of Digestive Diseases, Xuzhou Medical University, 84 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China.
| | - Sujuan Fei
- Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, 99 West Huaihai Road, Jiangsu Province, 221002, Xuzhou, China.
- Key Laboratory of Gastrointestinal Endoscopy, Xuzhou Medical University, Jiangsu Province, 221002, Xuzhou, China.
| |
|
10
|
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging-a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024:S2212-4403(23)01566-3. [PMID: 38637235 DOI: 10.1016/j.oooo.2023.12.790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 12/02/2023] [Accepted: 12/22/2023] [Indexed: 04/20/2024]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms in different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy combined the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any disagreement was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite the AI models' limitations, they showed promising results. Further studies are needed to explore specific applications and real-world scenarios before confidently integrating these models into dental practice.
Affiliation(s)
- Serlie Hartoonian
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Matine Hosseini
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Iman Yousefi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
| | - Mitra Ghazizadeh Ahsaie
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
| |
|
11
|
Horiuchi Y, Hirasawa T, Fujisaki J. Application of artificial intelligence for diagnosis of early gastric cancer based on magnifying endoscopy with narrow-band imaging. Clin Endosc 2024; 57:11-17. [PMID: 38178327 PMCID: PMC10834286 DOI: 10.5946/ce.2023.173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/14/2023] [Accepted: 08/16/2023] [Indexed: 01/06/2024] Open
Abstract
Although magnifying endoscopy with narrow-band imaging is the standard diagnostic test for gastric cancer, diagnosing gastric cancer with this technology requires considerable skill. Artificial intelligence has superior image recognition, and its usefulness in endoscopic image diagnosis has been reported in many settings. The diagnostic performance (accuracy, sensitivity, and specificity) of artificial intelligence using magnifying narrow-band imaging still images and videos for gastric cancer was higher than that of expert endoscopists, suggesting the usefulness of artificial intelligence in diagnosing gastric cancer. Histological diagnosis of gastric cancer using artificial intelligence is also promising. However, previous studies on the use of artificial intelligence to diagnose gastric cancer were small in scale; large-scale studies are therefore necessary to examine whether high diagnostic performance can be achieved. In addition, the diagnosis of gastric cancer using artificial intelligence has not yet become widespread in clinical practice, and further research is needed. In the future, artificial intelligence must be developed further as a clinical instrument, and its diagnostic performance is expected to improve as cases accumulate nationwide.
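The accuracy, sensitivity, and specificity figures this review compares reduce to simple confusion-matrix arithmetic. The sketch below uses made-up counts, not numbers from the cited studies:

```python
# Diagnostic metrics from a binary confusion matrix (invented counts).

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # recall on cancerous lesions
        "specificity": tn / (tn + fp),   # recall on non-cancerous lesions
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = diagnostic_metrics(tp=45, fp=8, tn=92, fn=5)
print(m)  # sensitivity 0.9, specificity 0.92, accuracy ~0.913
```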
Affiliation(s)
- Yusuke Horiuchi
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
| | - Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
| | - Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
| |
|
12
|
Tian C, Su W, Huang S, Shao B, Li X, Zhang Y, Wang B, Yu X, Li W. Identification of gastric cancer types based on hyperspectral imaging technology. JOURNAL OF BIOPHOTONICS 2024; 17:e202300276. [PMID: 37669431 DOI: 10.1002/jbio.202300276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Revised: 08/17/2023] [Accepted: 08/30/2023] [Indexed: 09/07/2023]
Abstract
Gastric cancer is becoming the second leading cause of cancer death. The treatment and prognosis of different types of gastric cancer vary greatly, but routine pathological examination is limited to the tissue level and is easily affected by subjective factors. In our study, we examined gastric mucosal samples from 50 normal tissues and 90 cancer tissues. Hyperspectral imaging technology was used to obtain spectral information. A two-classification model for distinguishing normal from cancer tissue and a four-classification model for cancer type identification were constructed based on an improved deep residual network (IDRN). The accuracies of the two-classification and four-classification models are 0.947 and 0.965, respectively. Hyperspectral imaging technology was used to extract molecular information to enable real-time diagnosis and accurate typing. The results show that hyperspectral imaging performs well in the diagnosis and type differentiation of gastric cancer and is expected to be useful in auxiliary diagnosis and treatment.
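The abstract does not detail the "improved deep residual network" architecture; the sketch below only illustrates the core residual idea (an identity shortcut plus a learned correction) on a toy per-pixel spectrum, with hand-set weights rather than trained ones.

```python
# Illustrative residual connection: y = x + W2 * relu(W1 * x). Pure-Python
# lists stand in for tensors; the weights and "spectrum" are invented.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w, b):
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    """Identity shortcut plus a learned residual correction."""
    h = relu(linear(x, w1, b1))
    r = linear(h, w2, b2)
    return [xi + ri for xi, ri in zip(x, r)]

# Toy 3-band "spectrum". With zero weights the residual vanishes and
# y == x, showing the shortcut preserves the signal.
x = [0.2, 0.5, 0.1]
zero_w = [[0.0] * 3 for _ in range(3)]
zero_b = [0.0] * 3
print(residual_block(x, zero_w, zero_b, zero_w, zero_b))  # [0.2, 0.5, 0.1]
```

This shortcut structure is what makes very deep classification networks, like the one the study trains on spectral data, easier to optimize.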
Affiliation(s)
- Chongxuan Tian
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Wenjing Su
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Sirui Huang
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Bowen Shao
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Xueyi Li
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Yuanbo Zhang
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Bingjie Wang
- School of Control Science and Engineering, Shandong University, Jinan, China
| | - Xiaojing Yu
- Department of Dermatology, Qilu Hospital of Shandong University, Jinan, China
| | - Wei Li
- School of Control Science and Engineering, Shandong University, Jinan, China
| |
|
13
|
Xin Y, Zhang Q, Liu X, Li B, Mao T, Li X. Application of artificial intelligence in endoscopic gastrointestinal tumors. Front Oncol 2023; 13:1239788. [PMID: 38144533 PMCID: PMC10747923 DOI: 10.3389/fonc.2023.1239788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 11/17/2023] [Indexed: 12/26/2023] Open
Abstract
With an increasing number of patients with gastrointestinal cancer, effective and accurate early diagnostic clinical tools are required to provide better health care for these patients. Recent studies have shown that artificial intelligence (AI) plays an important role in the diagnosis and treatment of patients with gastrointestinal tumors, not only improving the efficiency of early tumor screening but also significantly improving the survival rate of patients after treatment. With the aid of the efficient learning and judgment abilities of AI, endoscopists can improve the accuracy of endoscopic diagnosis and treatment and avoid incorrect descriptions or judgments of gastrointestinal lesions. The present article provides an overview of the application of various forms of artificial intelligence in gastric and colorectal cancers in recent years and clarifies directions for future research and clinical practice from a clinical perspective, providing a comprehensive theoretical basis for AI as a promising diagnostic and therapeutic tool for gastrointestinal cancer.
Affiliation(s)
| | | | | | | | | | - Xiaoyu Li
- Department of Gastroenterology, The Affiliated Hospital of Qingdao University, Qingdao, China
| |
|
14
|
Klang E, Sourosh A, Nadkarni GN, Sharif K, Lahat A. Deep Learning and Gastric Cancer: Systematic Review of AI-Assisted Endoscopy. Diagnostics (Basel) 2023; 13:3613. [PMID: 38132197 PMCID: PMC10742887 DOI: 10.3390/diagnostics13243613] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2023] [Revised: 11/23/2023] [Accepted: 12/02/2023] [Indexed: 12/23/2023] Open
Abstract
BACKGROUND Gastric cancer (GC), a significant health burden worldwide, is typically diagnosed in the advanced stages due to its non-specific symptoms and complex morphological features. Deep learning (DL) has shown potential for improving and standardizing early GC detection. This systematic review aims to evaluate the current status of DL in pre-malignant, early-stage, and gastric neoplasia analysis. METHODS A comprehensive literature search was conducted in PubMed/MEDLINE for original studies implementing DL algorithms for gastric neoplasia detection using endoscopic images. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was on studies providing quantitative diagnostic performance measures and those comparing AI performance with human endoscopists. RESULTS Our review encompasses 42 studies that utilize a variety of DL techniques. The findings demonstrate the utility of DL in GC classification, detection, tumor invasion depth assessment, cancer margin delineation, lesion segmentation, and detection of early-stage and pre-malignant lesions. Notably, DL models frequently matched or outperformed human endoscopists in diagnostic accuracy. However, heterogeneity in DL algorithms, imaging techniques, and study designs precluded a definitive conclusion about the best algorithmic approach. CONCLUSIONS The promise of artificial intelligence in improving and standardizing gastric neoplasia detection, diagnosis, and segmentation is significant. This review is limited by predominantly single-center studies and undisclosed datasets used in AI training, impacting generalizability and demographic representation. Further, retrospective algorithm training may not reflect actual clinical performance, and a lack of model details hinders replication efforts. More research is needed to substantiate these findings, including larger-scale multi-center studies, prospective clinical trials, and comprehensive technical reporting of DL algorithms and datasets, particularly regarding the heterogeneity in DL algorithms and study designs.
Affiliation(s)
- Eyal Klang
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (A.S.); (G.N.N.)
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- ARC Innovation Center, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
| | - Ali Sourosh
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (A.S.); (G.N.N.)
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Girish N. Nadkarni
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (A.S.); (G.N.N.)
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Kassem Sharif
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel;
| | - Adi Lahat
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel;
| |
|
15
|
El-Sayed A, Salman S, Alrubaiy L. The adoption of artificial intelligence assisted endoscopy in the Middle East: challenges and future potential. Transl Gastroenterol Hepatol 2023; 8:42. [PMID: 38021356 PMCID: PMC10643188 DOI: 10.21037/tgh-23-37] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 10/07/2023] [Indexed: 12/01/2023] Open
Abstract
The use of artificial intelligence (AI) in endoscopy has shown immense potential to enhance diagnostic accuracy, streamline procedures, and improve patient outcomes. There are potential uses in every field of endoscopy, from improving adenoma detection rate (ADR) in colonoscopy to reducing read time in capsule endoscopy or minimizing blind spots in gastroscopy. Indeed, some of these systems are already licensed and in commercial use across the world. In the Middle East, where healthcare systems are rapidly evolving, there is a growing interest in adopting AI technologies to revolutionise endoscopic practices. This article provides an overview of the advancements, potential opportunities and challenges associated with the implementation of AI in endoscopy within the Middle East region. Our aim is to contribute to the ongoing dialogue surrounding the implementation of AI in endoscopy and consider some of the factors that are particularly relevant in the Middle Eastern context, including the need to train the models for local populations, cost and training, as well as trying to ensure equity of access for patients. It provides valuable insights for healthcare professionals, policymakers, and researchers interested in leveraging AI to enhance endoscopic procedures, improve patient care, and address the unique healthcare needs of the Middle East population.
Affiliation(s)
- Ahmed El-Sayed
- Gastroenterology Department, Chelsea & Westminster Hospital, London, UK
| | - Sara Salman
- University of Sheffield Medical School, Sheffield, UK
| | - Laith Alrubaiy
- Gastroenterology Department, Healthpoint Hospital, Abu Dhabi, United Arab Emirates
- College of Medicine and Health Sciences, Khalifa University, Abu Dhabi, United Arab Emirates
| |
|
16
|
Wang W, Chen S, Qiao L, Zhang S, Liu Q, Yang K, Pan Y, Liu J, Liu W. Four Markers Useful for the Distinction of Intrauterine Growth Restriction in Sheep. Animals (Basel) 2023; 13:3305. [PMID: 37958061 PMCID: PMC10648371 DOI: 10.3390/ani13213305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Revised: 10/14/2023] [Accepted: 10/23/2023] [Indexed: 11/15/2023] Open
Abstract
Intrauterine growth restriction (IUGR) is a common perinatal complication in animal reproduction, with long-lasting negative effects on neonates and postnatal animals that seriously harm livestock production. In this study, we aimed to identify potential genes associated with the diagnosis of IUGR through bioinformatics analysis. Based on 73 differentially expressed related genes obtained by differential analysis and weighted gene co-expression network analysis, we used three machine learning algorithms to identify four IUGR-related hub genes (IUGR-HGs), namely ADAM9, CRYL1, NDP52, and SERPINA7, whose ROC curves showed that they are good diagnostic targets for IUGR. Next, we identified two molecular subtypes of IUGR through consensus clustering analysis and constructed a gene scoring system based on the IUGR-HGs. The results showed that the IUGR score was positively correlated with the risk of IUGR, and the AUC of the scoring system was 0.970. Finally, we constructed a new artificial neural network model based on the four IUGR-HGs to diagnose sheep IUGR; its accuracy reached 0.956. In conclusion, the IUGR-HGs we identified provide new potential molecular markers and models for the diagnosis of IUGR in sheep, offering new perspectives on the diagnosis of IUGR.
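The AUC values this abstract reports come from ROC analysis of the gene-based scores. A minimal way to compute an AUC from a set of risk scores is the Mann-Whitney rank formula, sketched here with invented scores and labels (higher score = higher predicted IUGR risk, matching the reported direction of the scoring system); this is not the paper's pipeline.

```python
# ROC AUC via the Mann-Whitney statistic: the probability that a random
# positive case scores above a random negative case, ties counting half.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical IUGR scores for 6 animals (1 = IUGR, 0 = normal).
scores = [0.95, 0.90, 0.70, 0.40, 0.30, 0.20]
labels = [1, 1, 1, 0, 1, 0]
print(roc_auc(scores, labels))  # 0.875
```

An AUC near 1.0, like the 0.970 reported for the IUGR score, means positives almost always outrank negatives under this pairwise comparison.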
Affiliation(s)
- Wannian Wang
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Sijia Chen
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Liying Qiao
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Siying Zhang
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Qiaoxia Liu
- Shanxi Animal Husbandry Technology Extension Service Center, Taiyuan 030001, China;
| | - Kaijie Yang
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Yangyang Pan
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Jianhua Liu
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
| | - Wenzhong Liu
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science, Shanxi Agricultural University, Taigu, Jinzhong 030801, China; (W.W.); (S.C.); (L.Q.); (S.Z.); (K.Y.); (Y.P.); (J.L.)
- Key Laboratory of Farm Animal Genetic Resources Exploration and Breeding of Shanxi Province, Jinzhong 030801, China
| |
|
17
|
Lu N, Guan X, Zhu J, Li Y, Zhang J. A Contrast-Enhanced CT-Based Deep Learning System for Preoperative Prediction of Colorectal Cancer Staging and RAS Mutation. Cancers (Basel) 2023; 15:4497. [PMID: 37760468 PMCID: PMC10526233 DOI: 10.3390/cancers15184497] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 09/04/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023] Open
Abstract
PURPOSE This study aimed to build a deep learning system using contrast-enhanced computed tomography (CT) portal-phase images to predict colorectal cancer patients' preoperative staging and RAS gene mutation status. METHODS The contrast-enhanced CT image dataset comprises portal-phase images from a retrospective cohort of 231 colorectal cancer patients. The deep learning system was developed via transfer learning for colorectal cancer detection, staging, and RAS gene mutation status prediction. This study used pre-trained Yolov7, vision transformer (VIT), swin transformer (SWT), EfficientNetV2, and ConvNeXt models. A total of 4620 contrast-enhanced CT images with annotated tumor bounding boxes were included in the tumor identification and staging dataset, and 19,700 contrast-enhanced CT images comprise the RAS gene mutation status prediction dataset. RESULTS In the validation cohort, the Yolov7-based detection model detected and staged tumors with a mean average precision at an IoU threshold of 0.5 (mAP_0.5) of 0.98. The area under the receiver operating characteristic curve (AUC) for the VIT-based prediction model in predicting RAS gene mutation status was 0.9591 in the test set and 0.9554 in the validation set. The detection and prediction networks of the deep learning system performed well in interpreting contrast-enhanced CT images. CONCLUSION In this study, a deep learning system was created based on contrast-enhanced CT portal-phase imaging to preoperatively predict the stage and RAS mutation status of colorectal cancer patients. This system will help clinicians choose the best treatment option to increase colorectal cancer patients' chances of survival and quality of life.
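The mAP_0.5 detection metric quoted in this abstract rests on an intersection-over-union (IoU) >= 0.5 criterion for matching predicted boxes to annotated tumor boxes. The sketch below computes IoU for axis-aligned boxes with invented coordinates; it is a generic illustration, not code from the study.

```python
# IoU for axis-aligned boxes given as (x1, y1, x2, y2); values invented.

def iou(a, b):
    """Intersection area divided by union area of two boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred = (10, 10, 50, 50)   # hypothetical predicted tumor box
truth = (20, 10, 60, 50)  # hypothetical annotated ground-truth box
print(iou(pred, truth))   # 0.6 -> counted as a true positive at IoU 0.5
```

mAP_0.5 then averages detection precision over recall levels (and classes) using this matching rule.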
Collapse
Affiliation(s)
- Na Lu
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, No. 121, Jiangjiayuan Road, Nanjing 210011, China (X.G.)
| | - Xiao Guan
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, No. 121, Jiangjiayuan Road, Nanjing 210011, China (X.G.)
| | - Jianguo Zhu
- Department of Radiology, The Second Affiliated Hospital of Nanjing Medical University, Nanjing 210011, China;
| | - Yuan Li
- Key Laboratory of Modern Toxicology, Ministry of Education, School of Public Health, Nanjing Medical University, Nanjing 211166, China;
| | - Jianping Zhang
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, No. 121, Jiangjiayuan Road, Nanjing 210011, China (X.G.)
| |
Collapse
|
18
|
Arif AA, Jiang SX, Byrne MF. Artificial intelligence in endoscopy: Overview, applications, and future directions. Saudi J Gastroenterol 2023; 29:269-277. [PMID: 37787347 PMCID: PMC10644999 DOI: 10.4103/sjg.sjg_286_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 08/16/2023] [Indexed: 09/15/2023] Open
Abstract
Since the emergence of artificial intelligence (AI) in medicine, endoscopy applications in gastroenterology have been at the forefront of innovation. The ever-increasing number of studies makes it necessary to organize and classify applications in a useful way. Separating AI capabilities into computer-aided detection (CADe), diagnosis (CADx), and quality assessment (CADq) allows for a systematic evaluation of each application. CADe studies have shown promise in accurate detection of esophageal, gastric, and colonic neoplasia as well as in identifying sources of bleeding and Crohn's disease in the small bowel. While more advanced CADx applications employ optical biopsies to provide further information to characterize neoplasia and grade inflammatory disease, diverse CADq applications ensure quality and increase the efficiency of procedures. Future applications show promise in advanced therapeutic modalities and integrated systems that provide multimodal capabilities. AI is set to revolutionize clinical decision making and the performance of endoscopy.
Collapse
Affiliation(s)
- Arif A. Arif
- Department of Medicine, University of British Columbia, Vancouver, BC, Canada
| | - Shirley X. Jiang
- Department of Medicine, University of British Columbia, Vancouver, BC, Canada
| | - Michael F. Byrne
- Division of Gastroenterology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Satisfai Health, Vancouver, BC, Canada
| |
Collapse
|
19
|
Lin CH, Hsu PI, Tseng CD, Chao PJ, Wu IT, Ghose S, Shih CA, Lee SH, Ren JH, Shie CB, Lee TF. Application of artificial intelligence in endoscopic image analysis for the diagnosis of a gastric cancer pathogen-Helicobacter pylori infection. Sci Rep 2023; 13:13380. [PMID: 37592004 PMCID: PMC10435453 DOI: 10.1038/s41598-023-40179-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 08/06/2023] [Indexed: 08/19/2023] Open
Abstract
Helicobacter pylori (H. pylori) infection is the principal cause of chronic gastritis, gastric ulcers, duodenal ulcers, and gastric cancer. In clinical practice, diagnosis of H. pylori infection from gastroenterologists' impression of endoscopic images is inaccurate and cannot be used for the management of gastrointestinal diseases. The aim of this study was to develop an artificial intelligence classification system for the diagnosis of H. pylori infection by pre-processing endoscopic images and applying machine learning methods. Endoscopic images of the gastric body and antrum from 302 patients receiving endoscopy, with H. pylori status confirmed by a rapid urease test at An Nan Hospital, were obtained for the derivation and validation of an artificial intelligence classification system. The H. pylori status was interpreted as positive or negative by a Convolutional Neural Network (CNN) with a Concurrent Spatial and Channel Squeeze and Excitation (scSE) network, combined with different classification models for deep learning of gastric images. The comprehensive assessment of H. pylori status by the scSE-CatBoost classification model using both body and antrum images from the same patients achieved an accuracy of 0.90, sensitivity of 1.00, specificity of 0.81, positive predictive value of 0.82, negative predictive value of 1.00, and area under the curve of 0.88. The data suggest that an artificial intelligence classification model using scSE-CatBoost deep learning on gastric endoscopic images can distinguish H. pylori status with good performance and is useful for the screening or diagnosis of H. pylori infection in clinical practice.
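The metrics reported above (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix of predicted versus true infection status. A minimal sketch of the arithmetic (illustrative, not the cited model; the example counts are made up):

```python
# Illustrative sketch: standard diagnostic-accuracy metrics from the four
# cells of a confusion matrix (tp/fp/tn/fn counts of patients).

def diagnostic_metrics(tp, fp, tn, fn):
    """Return the standard diagnostic-accuracy metrics as a dict."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on infected patients
        "specificity": tn / (tn + fp),   # recall on uninfected patients
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

A sensitivity of 1.00 alongside a specificity of 0.81, as reported, implies zero false negatives but some false positives, which is why the NPV is also 1.00 while the PPV drops to 0.82.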
Collapse
Affiliation(s)
- Chih-Hsueh Lin
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
| | - Ping-I Hsu
- Division of Gastroenterology, Department of Medicine, An Nan Hospital, China Medical University, Tainan, Taiwan
| | - Chin-Dar Tseng
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan.
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan.
| | - Pei-Ju Chao
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
| | - I-Ting Wu
- Division of Gastroenterology, Department of Medicine, An Nan Hospital, China Medical University, Tainan, Taiwan
| | - Supratip Ghose
- Department of Education and Research, An Nan Hospital, China Medical University, Tainan, Taiwan
| | - Chih-An Shih
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Antai Medical Care Corporation, Antai Tian-Sheng Memorial Hospital, Donggan, Pingtung County, Taiwan
- Department of Nursing, Meiho University, Neipu, Pingtung County, Taiwan
| | - Shen-Hao Lee
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
- Department of Radiation Oncology, Linkou Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Linkou, Taiwan
| | - Jia-Hong Ren
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan
| | - Chang-Bih Shie
- Division of Gastroenterology, Department of Medicine, An Nan Hospital, China Medical University, Tainan, Taiwan
| | - Tsair-Fwu Lee
- Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan.
- Medical Physics and Informatics Laboratory of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung, 80778, Taiwan.
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, 80708, Taiwan.
- PhD Program in Biomedical Engineering, Kaohsiung Medical University, Kaohsiung, 80708, Taiwan.
- School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, 80708, Taiwan.
| |
Collapse
|
20
|
Shen T, Wang H, Hu R, Lv Y. Developing neural network diagnostic models and potential drugs based on novel identified immune-related biomarkers for celiac disease. Hum Genomics 2023; 17:76. [PMID: 37587523 PMCID: PMC10433645 DOI: 10.1186/s40246-023-00526-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 08/11/2023] [Indexed: 08/18/2023] Open
Abstract
BACKGROUND As one of the most common intestinal inflammatory diseases, celiac disease (CD) is typically characterized by an autoimmune disorder resulting from ingesting gluten proteins. Although the incidence and prevalence of CD have increased over time, the diagnostic methods and treatment options are still limited. Therefore, it is urgent to investigate potential biomarkers and targeted drugs for CD. METHODS Gene expression data were downloaded from GEO datasets. Differential gene expression analysis was performed to identify the dysregulated immune-related genes. Multiple machine learning algorithms, including randomForest, SVM-RFE, and LASSO, were used to select the hub immune-related genes (HIGs). The immune-related gene score (IG score) and an artificial neural network (ANN) were constructed based on the HIGs. Potential drugs targeting the HIGs were identified by using the Enrichr platform and molecular docking. RESULTS We identified the dysregulated immune-related genes at a genome-wide level and demonstrated their roles in CD-related immune pathways. The hub genes (MR1, CCL25, and TNFSF13B) were further screened by integrating several machine learning algorithms. Meanwhile, the CD patients were divided into distinct subtypes with either high or low immunoactivity using single-sample gene set enrichment analysis (ssGSEA) and consensus clustering. By constructing the IG score based on the HIGs, we found that patients with a high IG score mainly belonged to the high-immunoactivity subgroup, which suggested a strong link between the HIGs and the immunoactivity of CD patients. In addition, the newly constructed ANN model showed the sound diagnostic ability of the HIGs. Mechanistically, we validated that the HIGs play pivotal roles in regulating the immune and inflammatory state of CD. By targeting the HIGs, we also identified potential drugs for anti-CD treatment using the Enrichr platform and molecular docking.
CONCLUSIONS This study unveils the HIGs and elucidates the networks regulated by these genes in the context of CD. It underscores the pivotal significance of the HIGs in accurately predicting the presence or absence of CD in patients. Consequently, this research offers promising prospects for the development of diagnostic biomarkers and therapeutic targets for CD.
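Hub-gene selection of the kind described above is commonly implemented as the intersection of the gene sets chosen independently by each algorithm (here randomForest, SVM-RFE, and LASSO). A minimal sketch (illustrative only; the non-hub gene names below are hypothetical):

```python
# Illustrative sketch: hub genes as the intersection of the gene sets
# selected by several independent feature-selection methods.

def hub_genes(*selected_sets):
    """Genes retained by every feature-selection method."""
    result = set(selected_sets[0])
    for s in selected_sets[1:]:
        result &= set(s)
    return sorted(result)

# Hypothetical per-method selections; only the three hub genes from the
# abstract (MR1, CCL25, TNFSF13B) appear in all three.
rf    = {"MR1", "CCL25", "TNFSF13B", "IL7R"}
svm   = {"MR1", "CCL25", "TNFSF13B", "CXCL9"}
lasso = {"MR1", "CCL25", "TNFSF13B", "CD8A"}
```

Intersecting the selections trades recall for robustness: a gene survives only if every method, with its different inductive bias, agrees it is informative.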
Collapse
Affiliation(s)
- Tao Shen
- Anhui Provincial Key Laboratory of Molecular Enzymology and Mechanism of Major Diseases, Key Laboratory of Biomedicine in Gene Diseases, Health of Anhui Higher Education Institutes, College of Life Sciences, Anhui Normal University, Wuhu, China.
| | - Haiyang Wang
- Anhui Provincial Key Laboratory of Molecular Enzymology and Mechanism of Major Diseases, Key Laboratory of Biomedicine in Gene Diseases, Health of Anhui Higher Education Institutes, College of Life Sciences, Anhui Normal University, Wuhu, China
| | - Rongkang Hu
- Anhui Provincial Key Laboratory of Molecular Enzymology and Mechanism of Major Diseases, Key Laboratory of Biomedicine in Gene Diseases, Health of Anhui Higher Education Institutes, College of Life Sciences, Anhui Normal University, Wuhu, China
| | - Yanni Lv
- Anhui Provincial Key Laboratory of Molecular Enzymology and Mechanism of Major Diseases, Key Laboratory of Biomedicine in Gene Diseases, Health of Anhui Higher Education Institutes, College of Life Sciences, Anhui Normal University, Wuhu, China
| |
Collapse
|
21
|
Choi S, Kim S. Artificial Intelligence in the Pathology of Gastric Cancer. J Gastric Cancer 2023; 23:410-427. [PMID: 37553129 PMCID: PMC10412971 DOI: 10.5230/jgc.2023.23.e25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 07/09/2023] [Accepted: 07/14/2023] [Indexed: 08/10/2023] Open
Abstract
Recent advances in artificial intelligence (AI) have provided novel tools for rapid and precise pathologic diagnosis. The introduction of digital pathology has enabled the acquisition of scanned slide images that are essential for the application of AI. Applications of AI for improved pathologic diagnosis include the reliable detection of easily overlooked lesions, such as a minute focus of metastatic tumor cells in lymph nodes; the accurate diagnosis of potentially controversial histologic findings, such as very well-differentiated carcinomas mimicking normal epithelial tissues; and the pathological subtyping of cancers. Additionally, AI algorithms enable precise scoring of immunohistochemical markers for targeted therapies, such as human epidermal growth factor receptor 2 and programmed death-ligand 1. Studies have revealed that AI assistance can reduce discordance of interpretation between pathologists and more accurately predict clinical outcomes. Several approaches have been employed to develop novel biomarkers from histologic images using AI. Moreover, AI-assisted analysis of the cancer microenvironment showed that the distribution of tumor-infiltrating lymphocytes was related to the response to immune checkpoint inhibitor therapy, emphasizing its value as a biomarker. As numerous studies have demonstrated the significance of AI-assisted interpretation and biomarker development, the AI-based approach will advance diagnostic pathology.
Collapse
Affiliation(s)
- Sangjoon Choi
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Seokhwi Kim
- Department of Pathology, Ajou University School of Medicine, Suwon, Korea
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea.
| |
Collapse
|
22
|
Kellerman R, Bleiweiss A, Samuel S, Margalit-Yehuda R, Aflalo E, Barzilay O, Ben-Horin S, Eliakim R, Zimlichman E, Soffer S, Klang E, Kopylov U. Spatiotemporal analysis of small bowel capsule endoscopy videos for outcomes prediction in Crohn's disease. Therap Adv Gastroenterol 2023; 16:17562848231172556. [PMID: 37440929 PMCID: PMC10333642 DOI: 10.1177/17562848231172556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Accepted: 04/12/2023] [Indexed: 07/15/2023] Open
Abstract
Background Deep learning techniques can accurately detect and grade inflammatory findings on images from capsule endoscopy (CE) in Crohn's disease (CD). However, the utility of deep learning on CE for predicting disease outcomes in CD has not been examined. Objectives We aimed to develop a deep learning model that can predict the need for biological therapy based on complete CE videos of newly diagnosed CD patients. Design This was a retrospective cohort study. The study cohort included treatment-naïve CD patients who underwent CE (SB3, Medtronic) within 6 months of diagnosis. Complete small bowel videos were extracted using the RAPID Reader software. Methods CE videos were scored using the Lewis score (LS). Clinical, endoscopic, and laboratory data were extracted from electronic medical records. Machine learning analysis was performed using the TimeSformer computer vision algorithm, developed to capture spatiotemporal characteristics for video analysis. Results The patient cohort included 101 patients. The median duration of follow-up was 902 (354-1626) days. Biological therapy was initiated in 37 (36.6%) of 101 patients. The TimeSformer algorithm achieved training and testing accuracies of 82% and 81%, respectively, with an area under the ROC curve (AUC) of 0.86 for predicting the need for biological therapy. In comparison, the AUC was 0.70 for the LS and 0.74 for fecal calprotectin. Conclusion Spatiotemporal analysis of complete CE videos of newly diagnosed CD patients accurately predicted the need for biological therapy, with accuracy superior to that of the human reader index or fecal calprotectin. Following future validation studies, this approach will allow for fast and accurate personalization of treatment decisions in CD.
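The AUC comparison above (model 0.86 vs. Lewis score 0.70 vs. calprotectin 0.74) has a simple rank-based reading: AUC is the probability that a randomly chosen patient who started biologics scored higher than a randomly chosen patient who did not. A minimal sketch of that computation (illustrative, not the study's pipeline):

```python
# Illustrative sketch: AUC computed as the Mann-Whitney probability that a
# random positive case scores above a random negative case (ties count 0.5).

def auc(scores, labels):
    """AUC via pairwise comparison; labels are 1 (event) or 0 (no event)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n^2) form is fine for cohort-sized data; library implementations use rank sums for efficiency.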
Collapse
Affiliation(s)
| | | | | | - Reuma Margalit-Yehuda
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | | | - Oranit Barzilay
- Department of Internal Medicine F, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | - Shomron Ben-Horin
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | - Rami Eliakim
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | - Eyal Zimlichman
- Sheba ARC and Hospital Management, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | - Shelly Soffer
- Department of Internal Medicine B, Assuta Medical Center, 7747629, Ashdod, Israel
- Ben-Gurion University of the Negev, Be’er Sheva, Israel
| | - Eyal Klang
- Sheba ARC, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| | - Uri Kopylov
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel and Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
23
|
A deep-learning based system using multi-modal data for diagnosing gastric neoplasms in real-time (with video). Gastric Cancer 2023; 26:275-285. [PMID: 36520317 DOI: 10.1007/s10120-022-01358-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/25/2022] [Indexed: 12/23/2022]
Abstract
BACKGROUND White light (WL) and weak-magnifying (WM) endoscopy are both important methods for diagnosing gastric neoplasms. This study constructed a deep-learning system named ENDOANGEL-MM (multi-modal) aimed at diagnosing gastric neoplasms in real time using WL and WM data. METHODS WL and WM images of the same lesion were combined into image pairs. A total of 4201 images, 7436 image pairs, and 162 videos were used for model construction and validation. Five models were constructed: two single-modal models (WL, WM) and three multi-modal models (data fusion on the task level, feature level, and input level). The models were tested on three levels: images, videos, and prospective patients. The best model was selected for constructing ENDOANGEL-MM. We compared the performance between the models and endoscopists and conducted a diagnostic study to explore ENDOANGEL-MM's assistance ability. RESULTS Model 4 (ENDOANGEL-MM) showed the best performance among the five models, and Model 2 performed better of the two single-modal models. The accuracy of ENDOANGEL-MM was higher than that of Model 2 on still images, real-time videos, and prospective patients (86.54 vs 78.85%, P = 0.134; 90.00 vs 85.00%, P = 0.179; 93.55 vs 70.97%, P < 0.001). Model 2 and ENDOANGEL-MM significantly outperformed endoscopists on WM data (85.00 vs 71.67%, P = 0.002) and multi-modal data (90.00 vs 76.17%, P = 0.002). With the assistance of ENDOANGEL-MM, the accuracy of non-experts improved significantly (85.75 vs 70.75%, P = 0.020) and showed no significant difference from that of experts (85.75 vs 89.00%, P = 0.159). CONCLUSIONS The multi-modal model constructed by feature-level fusion showed the best performance. ENDOANGEL-MM identified gastric neoplasms with good accuracy and has a potential role in real clinical practice.
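The three fusion strategies named in the abstract differ only in where the two modalities are combined: before the encoder (input level), between encoder and classifier (feature level, the winning design), or after per-modality classification (task level). A toy sketch of the distinction (illustrative only; the encoder and decision head below are hypothetical stand-ins, not ENDOANGEL-MM):

```python
# Illustrative sketch of input-, feature-, and task-level fusion of two
# imaging modalities. Images are toy lists of pixel intensities.

def extract_features(image):          # hypothetical per-modality encoder
    return [sum(image) / len(image), max(image)]

def classify(features):               # hypothetical decision head
    return "neoplasm" if sum(features) > 1.0 else "non-neoplasm"

def input_level_fusion(wl_image, wm_image):
    # Modalities merged before encoding.
    return classify(extract_features(wl_image + wm_image))

def feature_level_fusion(wl_image, wm_image):
    # Winning design in the abstract: concatenate per-modality features.
    return classify(extract_features(wl_image) + extract_features(wm_image))

def task_level_fusion(wl_image, wm_image):
    # Each modality classified alone; predictions combined by majority vote.
    votes = [classify(extract_features(wl_image)),
             classify(extract_features(wm_image))]
    return max(set(votes), key=votes.count)
```

Feature-level fusion lets the classifier weigh evidence from both modalities jointly, which is one plausible reason it outperformed the task-level (late) and input-level (early) variants here.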
Collapse
|
24
|
Abe S. Computer-aided endoscopic diagnosis of early gastric cancer on white light endoscopy: No detection, no characterization. Dig Endosc 2023; 35:492-493. [PMID: 36808148 DOI: 10.1111/den.14523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/23/2023] [Indexed: 02/23/2023]
Affiliation(s)
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
| |
Collapse
|
25
|
Katta MR, Kalluru PKR, Bavishi DA, Hameed M, Valisekka SS. Artificial intelligence in pancreatic cancer: diagnosis, limitations, and the future prospects-a narrative review. J Cancer Res Clin Oncol 2023:10.1007/s00432-023-04625-1. [PMID: 36739356 DOI: 10.1007/s00432-023-04625-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 01/27/2023] [Indexed: 02/06/2023]
Abstract
PURPOSE This review aims to explore the role of AI in the application of pancreatic cancer management and make recommendations to minimize the impact of the limitations to provide further benefits from AI use in the future. METHODS A comprehensive review of the literature was conducted using a combination of MeSH keywords, including "Artificial intelligence", "Pancreatic cancer", "Diagnosis", and "Limitations". RESULTS The beneficial implications of AI in the detection of biomarkers, diagnosis, and prognosis of pancreatic cancer have been explored. In addition, current drawbacks of AI use have been divided into subcategories encompassing statistical, training, and knowledge limitations; data handling, ethical and medicolegal aspects; and clinical integration and implementation. CONCLUSION Artificial intelligence (AI) refers to computational machine systems that accomplish a set of given tasks by imitating human intelligence in an exponential learning pattern. AI in gastrointestinal oncology has continued to provide significant advancements in the clinical, molecular, and radiological diagnosis and intervention techniques required to improve the prognosis of many gastrointestinal cancer types, particularly pancreatic cancer.
Collapse
Affiliation(s)
| | | | | | - Maha Hameed
- Clinical Research Department, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia.
| | | |
Collapse
|
26
|
Zhou B, Rao X, Xing H, Ma Y, Wang F, Rong L. A convolutional neural network-based system for detecting early gastric cancer in white-light endoscopy. Scand J Gastroenterol 2023; 58:157-162. [PMID: 36000979 DOI: 10.1080/00365521.2022.2113427] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
BACKGROUND White-light endoscopy (WLE) is the main and standard modality for detection of early gastric cancer (EGC), but the detection rate of EGC is not yet satisfactory. In this single-center retrospective study, we developed a convolutional neural network (CNN)-based system to automatically detect EGC in WLE images. METHODS An EGC detection system was constructed based on the CNN architecture EfficientDet. We trained our system with a dataset of 4527 images from 130 cases (cancerous images, 1737; noncancerous images, 2790) and tested its performance with a dataset of 1243 images from 64 cases (cancerous images, 445; noncancerous images, 798). RESULTS In case-based analysis, our system successfully detected EGC in 63 of 64 cases, for a sensitivity of 98.4%. In image-based analysis, the accuracy was 88.3%; the sensitivity, specificity, positive predictive value, and negative predictive value were 84.5%, 90.5%, 83.2%, and 91.3%, respectively. The most common cause of false positives was gastritis (57.9%). The most common cause of false negatives was that the lesion was too small, with a diameter of 10 mm or less (44.9%). CONCLUSION Our CNN-based EGC detection system achieved satisfactory sensitivity for detecting EGC in WLE images and shows great potential for assisting endoscopists with the detection of EGC.
Collapse
Affiliation(s)
- Bin Zhou
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Xiaolong Rao
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Haoqiang Xing
- Thunder Software Technology Co., Ltd, Beijing, China
| | - Yongchen Ma
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Feng Wang
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Long Rong
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| |
Collapse
|
27
|
Liu L, Dong Z, Cheng J, Bu X, Qiu K, Yang C, Wang J, Niu W, Wu X, Xu J, Mao T, Lu L, Wan X, Zhou H. Diagnosis and segmentation effect of the ME-NBI-based deep learning model on gastric neoplasms in patients with suspected superficial lesions - a multicenter study. Front Oncol 2023; 12:1075578. [PMID: 36727062 PMCID: PMC9885211 DOI: 10.3389/fonc.2022.1075578] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 12/29/2022] [Indexed: 01/17/2023] Open
Abstract
Background Endoscopically visible gastric neoplastic lesions (GNLs), including early gastric cancer and intraepithelial neoplasia, should be accurately diagnosed and promptly treated; however, a high rate of missed diagnoses of GNLs contributes to the potential risk of progression to gastric cancer. The aim of this study was to develop a deep learning-based computer-aided diagnosis (CAD) system for the diagnosis and segmentation of GNLs under magnifying endoscopy with narrow-band imaging (ME-NBI) in patients with suspected superficial lesions. Methods ME-NBI images of patients with GNLs in two centers were retrospectively analysed. Two convolutional neural network (CNN) modules were developed and trained on these images: CNN1 was trained to diagnose GNLs, and CNN2 was trained for segmentation. An additional internal test set and an external test set from another center were used to evaluate diagnosis and segmentation performance. Results CNN1 showed a diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 90.8%, 92.5%, 89.0%, 89.4% and 92.2%, respectively, with an area under the curve (AUC) of 0.928 in the internal test set. With CNN1 assistance, all endoscopists achieved higher accuracy than with independent diagnosis. The average intersection over union (IoU) between CNN2 and the ground truth was 0.5837, with precision, recall, and Dice coefficient of 0.776, 0.983 and 0.867, respectively. Conclusions This CAD system can be used as an auxiliary tool to diagnose and segment GNLs, assisting endoscopists in more accurately diagnosing GNLs and delineating their extent to improve the positive rate of lesion biopsy and ensure the integrity of endoscopic resection.
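The segmentation figures above (IoU 0.5837, Dice 0.867) are overlap measures between the predicted and ground-truth lesion masks. A minimal sketch of both (illustrative, not CNN2; masks are represented as sets of pixel indices for clarity):

```python
# Illustrative sketch: IoU and Dice overlap between two binary masks,
# each represented as a set of pixel indices.

def iou(pred, truth):
    """Intersection over union of two pixel sets."""
    union = pred | truth
    return len(pred & truth) / len(union) if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); equals 2*IoU / (1 + IoU)."""
    total = len(pred) + len(truth)
    return 2 * len(pred & truth) / total if total else 1.0
```

The two metrics are monotonically related (Dice = 2·IoU/(1+IoU)), which is why an IoU of 0.5837 and a Dice of 0.867 are mutually consistent in the abstract.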
Collapse
Affiliation(s)
- Leheng Liu
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Pancreatic Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Zhixia Dong
- Department of Gastroenterology, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai, China
| | - Jinnian Cheng
- Department of Gastroenterology, Shanghai Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xiongzhu Bu
- School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing, China
| | - Kaili Qiu
- School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing, China
| | - Chuan Yang
- School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing, China
| | - Jing Wang
- Department of Pathology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wenlu Niu
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xiaowan Wu
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Jingxian Xu
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Pancreatic Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Tiancheng Mao
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Pancreatic Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Lungen Lu
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Pancreatic Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xinjian Wan
- Department of Gastroenterology, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai, China
- *Correspondence: Hui Zhou; Xinjian Wan
| | - Hui Zhou
- Department of Gastroenterology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Pancreatic Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- *Correspondence: Hui Zhou; Xinjian Wan
| |
Collapse
|
28
|
Xu F, Zhu C, Wang Z, Zhang L, Gao H, Ma Z, Gao Y, Guo Y, Li X, Luo Y, Li M, Shen G, Liu H, Li Y, Zhang C, Cui J, Li J, Jiang H, Liu J. Deep learning for real-time detection of breast cancer presenting pathological nipple discharge by ductoscopy. Front Oncol 2023; 13:1103145. [PMID: 37035165 PMCID: PMC10073663 DOI: 10.3389/fonc.2023.1103145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 02/22/2023] [Indexed: 04/11/2023] Open
Abstract
Objective As a common breast cancer-related complaint, pathological nipple discharge (PND) detected by ductoscopy is often missed diagnosed. Deep learning techniques have enabled great advances in clinical imaging but are rarely applied in breast cancer with PND. This study aimed to design and validate an Intelligent Ductoscopy for Breast Cancer Diagnostic System (IDBCS) for breast cancer diagnosis by analyzing real-time imaging data acquired by ductoscopy. Materials and methods The present multicenter, case-control trial was carried out in 6 hospitals in China. Images for consecutive patients, aged ≥18 years, with no previous ductoscopy, were obtained from the involved hospitals. All individuals with PND confirmed from breast lesions by ductoscopy were eligible. Images from Beijing Chao-Yang Hospital were randomly assigned (8:2) to the training (IDBCS development) and internal validation (performance evaluation of the IDBCS) datasets. Diagnostic performance was further assessed with internal and prospective validation datasets from Beijing Chao-Yang Hospital; further external validation was carried out with datasets from 5 primary care hospitals. Diagnostic accuracies, sensitivities, specificities, and positive and negative predictive values for IDBCS and endoscopists (expert, competent, or trainee) in the detection of malignant lesions were obtained by the Clopper-Pearson method. Results Totally 11305 ductoscopy images in 1072 patients were utilized for developing and testing the IDBCS. Area under the curves (AUCs) in breast cancer detection were 0·975 (95%CI 0·899-0·998) and 0·954 (95%CI 0·925-0·975) in the internal validation and prospective datasets, respectively, and ranged between 0·922 (95%CI 0·866-0·960) and 0·965 (95%CI 0·892-0·994) in the 5 external validation datasets. 
The IDBCS had superior diagnostic accuracy compared with expert (0.912 [95%CI 0.839-0.959] vs 0.726 [0.672-0.775]; p<0.001), competent (0.699 [95%CI 0.645-0.750], p<0.001), and trainee (0.703 [95%CI 0.648-0.753], p<0.001) endoscopists. Conclusions IDBCS outperforms clinical oncologists, achieving high accuracy in diagnosing breast cancer with PND. The novel system could help endoscopists improve their diagnostic efficacy in breast cancer diagnosis.
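The Clopper-Pearson method named in this abstract derives exact binomial confidence intervals by inverting the binomial test instead of using a normal approximation, which matters when a diagnostic proportion sits near 0 or 1. A minimal sketch (not the authors' code; the counts below are illustrative, and SciPy is assumed available):

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Illustrative counts only (the paper reports proportions, not raw counts):
lo, hi = clopper_pearson(8, 10)
print(f"8/10 correct -> 95% CI ({lo:.3f}, {hi:.3f})")
```

The interval is deliberately conservative: its coverage is at least the nominal level for every true proportion, unlike Wald intervals.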
Affiliation(s)
- Feng Xu: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Chuang Zhu: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Zhihao Wang: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Lei Zhang: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Haifeng Gao: Breast Disease Prevention and Treatment Center, Haidian Maternal and Child Health Hospital, Beijing, China
- Zhenhai Ma: Department of General Surgery, Beijing Huairou Hospital, Beijing, China
- Yue Gao: Department of General Surgery, Beijing Huairou Hospital, Beijing, China
- Yang Guo: Department of Breast Surgery, Beijing Yanqing District Maternal and Child Health Care Hospital, Beijing, China
- Xuewen Li: Department of General Surgery, Beijing Pinggu Hospital, Beijing, China
- Yunzhao Luo: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Mengxin Li: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Guangqian Shen: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- He Liu: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Yanshuang Li: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Chao Zhang: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Jianxiu Cui: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Jie Li: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Hongchuan Jiang: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Jun Liu: Department of Breast Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Correspondence: Feng Xu, Chuang Zhu, Hongchuan Jiang, Jun Liu
29
Shi Y, Wei N, Wang K, Wu J, Tao T, Li N, Lv B. Deep learning-assisted diagnosis of chronic atrophic gastritis in endoscopy. Front Oncol 2023; 13:1122247. [PMID: 36950553] [PMCID: PMC10025314] [DOI: 10.3389/fonc.2023.1122247]
Abstract
Background Chronic atrophic gastritis (CAG) is a precancerous condition that is not easy to detect during endoscopy. Improving the detection rate of CAG under endoscopy is essential to reduce or interrupt the progression to gastric cancer. This study aimed to construct a deep learning (DL) model for CAG recognition based on endoscopic images to improve the CAG detection rate during endoscopy. Methods We collected 10,961 endoscopic images and 118 video clips from 4,050 patients. For model training and testing, we divided them into two groups based on the pathological results: CAG and chronic non-atrophic gastritis (CNAG). We compared the performance of four state-of-the-art (SOTA) DL networks for CAG recognition and selected one of them for further improvement; the improved network was called GAM-EfficientNet. Finally, we compared GAM-EfficientNet with three endoscopists and analyzed the decision basis of the network in the form of heatmaps. Results After fine-tuning and transfer learning, the sensitivity, specificity, and accuracy of GAM-EfficientNet reached 93%, 94%, and 93.5% in the external test set and 96.23%, 89.23%, and 92.37% in the video test set, respectively, which were higher than those of the three endoscopists. Conclusions The CAG recognition model based on deep learning has high sensitivity and accuracy, and its performance exceeds that of endoscopists.
Affiliation(s)
- Yanting Shi: Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Ning Wei: Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Kunhong Wang: Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Jingjing Wu: Department of Internal Medicine, Zhangdian Maternal and Child Health Care Hospital, Zibo, Shandong, China
- Tao Tao: Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Na Li: Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Bing Lv: School of Computer Science and Technology, Shandong University of Technology, Zibo, Shandong, China
- Correspondence: Bing Lv, Na Li
30
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati: Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve: Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara: Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross: Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
31
Zhu X, Ma Y, Guo D, Men J, Xue C, Cao X, Zhang Z. A Framework to Predict Gastric Cancer Based on Tongue Features and Deep Learning. Micromachines (Basel) 2022; 14:53. [PMID: 36677112] [PMCID: PMC9865689] [DOI: 10.3390/mi14010053]
Abstract
Gastric cancer has become a global health issue that severely disrupts daily life. Early detection and immediate treatment of gastric cancer contribute significantly to the protection of human health. However, routine gastric cancer examinations carry the risk of complications and are time-consuming. We proposed a framework to predict gastric cancer non-invasively and conveniently. A total of 703 tongue images were acquired using a bespoke tongue image capture instrument, and a dataset containing subjects with and without gastric cancer was created. Because the images acquired by this instrument contain non-tongue areas, the Deeplabv3+ network was applied for tongue segmentation to reduce interference in feature extraction. Nine tongue features were extracted; relationships between tongue features and gastric cancer were explored using statistical methods and deep learning; and finally a prediction framework for gastric cancer was designed. The experimental results showed that the proposed framework had a strong detection ability, with an accuracy of 93.6%. The gastric cancer prediction framework, created by combining statistical methods and deep learning, proposes a scheme for exploring the relationships between gastric cancer and tongue features. This framework contributes to the effective early diagnosis of patients with gastric cancer.
Affiliation(s)
- Xiaolong Zhu: Key Laboratory of Instrumentation Science & Dynamic Measurement, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Yuhang Ma: Key Laboratory of Instrumentation Science & Dynamic Measurement, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Dong Guo: Shanxi University of Chinese Medicine, Taiyuan 030051, China
- Jiuzhang Men: Shanxi University of Chinese Medicine, Taiyuan 030051, China
- Chenyang Xue: Key Laboratory of Instrumentation Science & Dynamic Measurement, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Xiyuan Cao: Key Laboratory of Instrumentation Science & Dynamic Measurement, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Zhidong Zhang: Key Laboratory of Instrumentation Science & Dynamic Measurement, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
32
Ikenoyama Y, Tanaka K, Umeda Y, Hamada Y, Yukimoto H, Yamada R, Tsuboi J, Nakamura M, Katsurahara M, Horiki N, Nakagawa H. Effect of adding acetic acid when performing magnifying endoscopy with narrow band imaging for diagnosis of Barrett's esophageal adenocarcinoma. Endosc Int Open 2022; 10:E1528-E1536. [PMID: 36531673] [PMCID: PMC9754883] [DOI: 10.1055/a-1948-2910]
Abstract
Background and study aims Magnifying endoscopy with narrow band imaging (M-NBI) was developed to diagnose Barrett's esophageal adenocarcinoma (BEA); however, this method remains challenging for inexperienced endoscopists. We aimed to evaluate a modified M-NBI technique that included spraying acetic acid (M-AANBI). Patients and methods Eight endoscopists retrospectively examined 456 endoscopic images obtained from 28 patients with 29 endoscopically resected BEA lesions using three validation schemes: validation 1 (260 images), wherein the diagnostic performances of M-NBI and M-AANBI were compared, with the dataset including 65 images each of BEA and non-neoplastic Barrett's esophagus (NNBE) obtained using each modality; validation 2 (112 images), wherein 56 pairs of M-NBI and M-AANBI images were prepared from the same BEA and NNBE lesions, and diagnoses derived using M-NBI alone were compared to those obtained using both M-NBI and M-AANBI; and validation 3 (84 images), wherein the ease of identifying the BEA demarcation line (DL) was scored via a visual analog scale in 28 patients using magnifying endoscopy with white-light imaging (M-WLI), M-NBI, and M-AANBI. Results For validation 1, M-AANBI was superior to M-NBI in terms of sensitivity (90.8% vs. 64.6%), specificity (98.5% vs. 76.9%), and accuracy (94.6% vs. 70.4%) (all P < 0.05). For validation 2, the accuracy of M-NBI alone was significantly improved when combined with M-AANBI (from 70.5% to 89.3%; P < 0.05). For validation 3, M-AANBI had the highest mean score for ease of DL recognition (8.75) compared to M-WLI (3.63) and M-NBI (6.25) (all P < 0.001). Conclusions Using M-AANBI might improve the accuracy of BEA diagnosis.
Affiliation(s)
- Yohei Ikenoyama: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan; Department of Endoscopy, Mie University Hospital, Tsu, Japan
- Kyosuke Tanaka: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan; Department of Endoscopy, Mie University Hospital, Tsu, Japan
- Yuhei Umeda: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan; Department of Endoscopy, Mie University Hospital, Tsu, Japan
- Yasuhiko Hamada: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan
- Hiroki Yukimoto: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan
- Reiko Yamada: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan
- Junya Tsuboi: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan
- Misaki Nakamura: Department of Endoscopy, Mie University Hospital, Tsu, Japan
- Noriyuki Horiki: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan
- Hayato Nakagawa: Department of Gastroenterology and Hepatology, Mie University Graduate School of Medicine, Tsu, Japan; Department of Endoscopy, Mie University Hospital, Tsu, Japan
33
Ochiai K, Ozawa T, Shibata J, Ishihara S, Tada T. Current Status of Artificial Intelligence-Based Computer-Assisted Diagnosis Systems for Gastric Cancer in Endoscopy. Diagnostics (Basel) 2022; 12:3153. [PMID: 36553160] [PMCID: PMC9777622] [DOI: 10.3390/diagnostics12123153]
Abstract
Artificial intelligence (AI) is gradually being utilized in various fields as its performance improves with the development of deep learning methods, the availability of big data, and advances in computer processing units. In the field of medicine, AI is mainly implemented in image recognition, such as in radiographic and pathologic diagnoses. In the realm of gastrointestinal endoscopy, although AI-based computer-assisted detection/diagnosis (CAD) systems have been applied in some areas, such as colorectal polyp detection and diagnosis, their implementation in real-world clinical settings has so far been limited. The accurate detection or diagnosis of gastric cancer (GC) is one of the challenges in which performance varies greatly depending on the endoscopist's skill. The diagnosis of early GC is especially challenging, partly because early GC mimics atrophic gastritis in the background mucosa. Therefore, several CAD systems for GC are being actively developed. The development of a CAD system for GC is considered challenging because it requires a large number of GC images. In particular, early-stage GC images are rarely available, partly because gastric cancer is difficult to diagnose during its early stages. Additionally, the training image data should be of sufficiently high quality to conduct proper CAD training. Recently, several AI systems for GC that exhibit a robust performance, owing to being trained on a large number of high-quality images, have been reported. This review outlines the current status and prospects of AI use in esophagogastroduodenoscopy (EGDS), focusing on the diagnosis of GC.
Affiliation(s)
- Kentaro Ochiai: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- Tsuyoshi Ozawa: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Musashi-Urawa, Saitama 336-0021, Japan; AI Medical Service Inc., Toshima-ku, Tokyo 104-0061, Japan
- Junichi Shibata: Tada Tomohiro Institute of Gastroenterology and Proctology, Musashi-Urawa, Saitama 336-0021, Japan; AI Medical Service Inc., Toshima-ku, Tokyo 104-0061, Japan
- Soichiro Ishihara: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- Tomohiro Tada: Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Musashi-Urawa, Saitama 336-0021, Japan; AI Medical Service Inc., Toshima-ku, Tokyo 104-0061, Japan
34
Luo Q, Yang H, Hu B. Application of artificial intelligence in the endoscopic diagnosis of early gastric cancer, atrophic gastritis, and Helicobacter pylori infection. J Dig Dis 2022; 23:666-674. [PMID: 36661411] [DOI: 10.1111/1751-2980.13154]
Abstract
Gastric cancer (GC) is one of the most serious health problems worldwide. Chronic atrophic gastritis (CAG) is most commonly caused by Helicobacter pylori (H. pylori) infection. Currently, endoscopic detection of early gastric cancer (EGC) and CAG remains challenging for endoscopists, and the diagnostic accuracy of H. pylori infection by endoscopy is approximately 70%. Artificial intelligence (AI) can assist endoscopic diagnosis, including detection, prediction of depth of invasion, boundary delineation, and anatomical localization of EGC, and can achieve diagnostic performance comparable to that of experienced endoscopists. In this review, we summarize various AI-assisted systems for the diagnosis of EGC, CAG, and H. pylori infection.
Affiliation(s)
- Qi Luo: Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China
- Hang Yang: Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China
- Bing Hu: Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China
35
Guan X, Lu N, Zhang J. Accurate preoperative staging and HER2 status prediction of gastric cancer by the deep learning system based on enhanced computed tomography. Front Oncol 2022; 12:950185. [PMID: 36452488] [PMCID: PMC9702985] [DOI: 10.3389/fonc.2022.950185]
Abstract
PURPOSE To construct a deep learning system (DLS) based on enhanced computed tomography (CT) images for preoperative prediction of staging and human epidermal growth factor receptor 2 (HER2) status in gastric cancer patients. METHODS The raw enhanced CT image dataset consisted of CT images of 389 patients from the retrospective cohort, The Cancer Imaging Archive (TCIA) cohort, and the prospective cohort. The DLS was developed by transfer learning for tumor detection, staging, and HER2 status prediction. The pre-trained Yolov5, EfficientNet, EfficientNetV2, Vision Transformer (VIT), and Swin Transformer (SWT) models were studied. The tumor detection and staging dataset consisted of 4,860 enhanced CT images with annotated tumor bounding boxes. The HER2 status prediction dataset consisted of 38,900 enhanced CT images. RESULTS The DetectionNet based on Yolov5 realized tumor detection and staging and achieved a mean average precision (IoU = 0.5) (mAP_0.5) of 0.909 in the external validation cohort. The VIT-based PredictionNet performed optimally in HER2 status prediction, with areas under the receiver operating characteristic curve (AUCs) of 0.9721 and 0.9995 in the TCIA cohort and the prospective cohort, respectively. The DLS, which included DetectionNet and PredictionNet, showed excellent performance in CT image interpretation. CONCLUSION This study developed an enhanced CT-based DLS to preoperatively predict the stage and HER2 status of gastric cancer patients, which will help in choosing the appropriate treatment to improve the survival of gastric cancer patients.
Affiliation(s)
- Jianping Zhang: Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
36
Deng Y, Chen Y, Xie L, Wang L, Zhan J. The investigation of construction and clinical application of image recognition technology assisted bronchoscopy diagnostic model of lung cancer. Front Oncol 2022; 12:1001840. [PMID: 36387178] [PMCID: PMC9647035] [DOI: 10.3389/fonc.2022.1001840]
Abstract
Background The incidence and mortality of lung cancer rank first in China. Bronchoscopy is one of the most common diagnostic methods for lung cancer. In recent years, image recognition technology (IRT) has been increasingly studied and applied in the medical field. We developed a deep learning-based diagnostic model for lung cancer under bronchoscopy and attempted to classify pathological types. Methods A total of 2,238 lesion images were collected retrospectively from 666 cases of lung cancer diagnosed by pathology in the bronchoscopy center of the Third Xiangya Hospital from October 1, 2017 to December 31, 2020, together with images from 152 benign cases from June 1, 2015 to December 31, 2020. The benign and malignant images were divided into training, validation, and test sets at a ratio of 7:1:2. The model was trained and tested based on a deep learning method. We also attempted to classify different pathological types of lung cancer using the model. Furthermore, 9 clinicians with different levels of experience were invited to diagnose the same test images, and their results were compared with the model's. Results The diagnostic model took a total of 30 s to diagnose 467 test images. The overall accuracy, sensitivity, specificity and area under the curve (AUC) of the model in differentiating benign and malignant lesions were 0.951, 0.978, 0.833 and 0.940, respectively, equivalent to the judgment of the 2 doctors in the senior group and higher than those of the other doctors. In the classification of squamous cell carcinoma (SCC) and adenocarcinoma (AC), the overall accuracy was 0.745 (0.790 for SCC and 0.667 for AC), with an AUC of 0.728. Conclusion The performance of our diagnostic model in distinguishing benign and malignant lesions in bronchoscopy is roughly the same as that of experienced clinicians, and its efficiency is much higher than manual diagnosis. Our study verifies the feasibility of applying IRT to the diagnosis of lung cancer during white light bronchoscopy.
Affiliation(s)
- Yihong Deng: Department of Pulmonary and Critical Care Medicine, the Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Yuan Chen: Department of Computer Science, School of Informatics, Xiamen University, Xiamen, Fujian, China
- Lihua Xie: Department of Pulmonary and Critical Care Medicine, the Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Liansheng Wang: Department of Computer Science, School of Informatics, Xiamen University, Xiamen, Fujian, China
- Juan Zhan: Department of Oncology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, Fujian, China
- Correspondence: Lihua Xie, Liansheng Wang, Juan Zhan
37
Jin J, Zhang Q, Dong B, Ma T, Mei X, Wang X, Song S, Peng J, Wu A, Dong L, Kong D. Automatic detection of early gastric cancer in endoscopy based on Mask region-based convolutional neural networks (Mask R-CNN) (with video). Front Oncol 2022; 12:927868. [PMID: 36338757] [PMCID: PMC9630732] [DOI: 10.3389/fonc.2022.927868]
Abstract
The artificial intelligence (AI)-assisted endoscopic detection of early gastric cancer (EGC) has been preliminarily developed. The currently used algorithms still exhibit limitations of high computational cost and low-precision expression. The present study aimed to develop an endoscopic automatic detection system for EGC based on a mask region-based convolutional neural network (Mask R-CNN) and to evaluate its performance in controlled trials. For this purpose, a total of 4,471 white light images (WLIs) and 2,662 narrow band images (NBIs) of EGC were obtained for training and testing. In total, 10 WLI videos were obtained prospectively to examine the performance of the Mask R-CNN system. Furthermore, 400 WLIs were randomly selected for comparison between the Mask R-CNN system and doctors. The evaluation criteria included accuracy, sensitivity, specificity, positive predictive value and negative predictive value. The results revealed that there were no significant differences between the pathological diagnosis and the Mask R-CNN system in the WLI test (χ2 = 0.189, P=0.664; accuracy, 90.25%; sensitivity, 91.06%; specificity, 89.01%) or in the NBI test (χ2 = 0.063, P=0.802; accuracy, 95.12%; sensitivity, 97.59%). Among the 10 real-time WLI videos, the test speed reached up to 35 frames/sec, with an accuracy of 90.27%. In a controlled experiment of 400 WLIs, the sensitivity of the Mask R-CNN system was significantly higher than that of the experts (χ2 = 7.059, P=0.000; 93.00% vs 80.20%), its specificity was higher than that of the juniors (χ2 = 9.955, P=0.000; 82.67% vs 71.87%), and its overall accuracy was higher than that of the seniors (χ2 = 7.009, P=0.000; 85.25% vs 78.00%). On the whole, the present study demonstrates that the Mask R-CNN system exhibited an excellent performance for the detection of EGC, particularly for the real-time analysis of WLIs. It may thus be effectively applied in clinical settings.
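The χ² comparisons reported in this abstract test whether two detection rates differ. A hedged sketch of the same kind of test on a 2×2 table of detected/missed calls (the counts below are invented for illustration, not the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = rater (model, experts); columns = (detected, missed), 100 lesions each.
table = [[93, 7],
         [80, 20]]

# chi2_contingency applies Yates' continuity correction for 2x2 tables by default.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```

With these illustrative counts the difference in detection rates is significant at the 0.05 level; the study's exact χ² values depend on its own raw counts.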
Affiliation(s)
- Jing Jin: Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Qianqian Zhang: Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Bill Dong: School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Tao Ma: School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Xuecan Mei: Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Wang: Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Shaofang Song: Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Jie Peng: Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Aijiu Wu: Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Lanfang Dong: School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Derun Kong: Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Correspondence: Derun Kong
38
Ma M, Li Z, Yu T, Liu G, Ji R, Li G, Guo Z, Wang L, Qi Q, Yang X, Qu J, Wang X, Zuo X, Ren H, Li Y. Application of deep learning in the real-time diagnosis of gastric lesion based on magnifying optical enhancement videos. Front Oncol 2022; 12:945904. [PMID: 35992850] [PMCID: PMC9389533] [DOI: 10.3389/fonc.2022.945904]
Abstract
Background and aim Magnifying image-enhanced endoscopy has been demonstrated to have higher diagnostic accuracy than white-light endoscopy. However, differentiating early gastric cancers (EGCs) from benign lesions is difficult for beginners. We aimed to determine whether a computer-aided model for the diagnosis of gastric lesions can be applied to videos rather than still images. Methods A total of 719 magnifying optical enhancement images of EGCs, 1,490 optical enhancement images of benign gastric lesions, and 1,514 images of background mucosa were retrospectively collected to train and develop a computer-aided diagnostic model. Subsequently, 101 video segments and 671 independent images were used for validation, and error frames were labeled to retrain the model. Finally, a total of 117 unaltered full-length videos were utilized to test the model, and the results were compared with diagnoses made by independent endoscopists. Results Except for atrophy combined with intestinal metaplasia (IM) and low-grade neoplasia, the diagnostic accuracy was 0.90 (85/94). The sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and overall accuracy of the model in distinguishing EGC from non-cancerous lesions were 0.91 (48/53), 0.78 (50/64), 4.14, 0.12, and 0.84 (98/117), respectively. No significant difference was observed in overall diagnostic accuracy between the computer-aided model and experts, and agreement between the model and experts was good, with a kappa value of 0.63. Conclusions The performance of the computer-aided model for the diagnosis of EGC is comparable to that of experts. The magnifying optical enhancement model alone may not be able to deal with all lesions in the stomach, especially near foci of severe atrophy with IM. These results warrant further validation in prospective studies with more patients. A ClinicalTrials.gov registration was obtained (identifier number: NCT04563416).
Clinical Trial Registration ClinicalTrials.gov, identifier NCT04563416.
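The kappa statistic quoted in this abstract measures model-expert agreement beyond what chance alone would produce. A small sketch using scikit-learn (assumed available; the per-lesion calls below are invented for illustration, not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-lesion calls: 1 = EGC, 0 = non-cancerous lesion.
model_calls  = [1, 1, 0, 0, 1, 0, 1, 0]
expert_calls = [1, 1, 0, 0, 0, 0, 1, 1]

# Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)
kappa = cohen_kappa_score(model_calls, expert_calls)
print(f"kappa = {kappa:.2f}")
```

Values around 0.6-0.8 are conventionally read as "substantial" agreement, which is the range the study's reported 0.63 falls into.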
Collapse
Affiliation(s)
- Mingjun Ma
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Zhen Li
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Tao Yu
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Guanqun Liu
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Rui Ji
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Guangchao Li
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Zhuang Guo
  - Department of Gastroenterology, Shengli Oilfield Central Hospital, Dongying, China
- Limei Wang
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Qingqing Qi
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Xiaoxiao Yang
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Junyan Qu
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Xiao Wang
  - Department of Pathology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Xiuli Zuo
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
- Hongliang Ren
  - Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
  - Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- Yanqing Li
  - Department of Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Laboratory of Translational Gastroenterology, Qilu Hospital of Shandong University, Jinan, China
  - Robot Engineering Laboratory for Precise Diagnosis and Therapy of Gastrointestinal Tumor, Qilu Hospital of Shandong University, Jinan, China
  - *Correspondence: Yanqing Li,
|
39
|
Ma H, Wang L, Chen Y, Tian L. Convolutional neural network-based artificial intelligence for the diagnosis of early esophageal cancer based on endoscopic images: A meta-analysis. Saudi J Gastroenterol 2022; 28:332-340. [PMID: 35848703 PMCID: PMC9752541 DOI: 10.4103/sjg.sjg_178_22] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
BACKGROUND Early screening and treatment of esophageal cancer (EC) is particularly important for the survival and prognosis of patients. However, early EC is difficult to diagnose by routine endoscopic examination. Convolutional neural network (CNN)-based artificial intelligence (AI) has therefore become a very promising method for diagnosing early EC from endoscopic images. The aim of this study was to evaluate the diagnostic performance of CNN-based AI for detecting early EC on endoscopic images. METHODS A comprehensive search was performed to identify relevant English-language articles concerning CNN-based AI in the diagnosis of early EC based on endoscopic images (from the date of database establishment to April 2022). The pooled sensitivity (SEN), pooled specificity (SPE), positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR) with 95% confidence intervals (CI), as well as the summary receiver operating characteristic (SROC) curve and area under the curve (AUC), were calculated. We used the I2 test to assess heterogeneity and investigated its source by meta-regression analysis. Publication bias was assessed using Deeks' funnel plot asymmetry test. RESULTS Seven studies met the eligibility criteria. The SEN and SPE were 0.90 (95% CI: 0.82-0.94) and 0.91 (95% CI: 0.79-0.96), respectively. The LR+ was 9.8 (95% CI: 3.8-24.8) and the LR- was 0.11 (95% CI: 0.06-0.21), revealing that CNN-based AI exhibited an excellent ability to confirm or exclude early EC on endoscopic images. Additionally, the SROC curve showed an AUC of 0.95 (95% CI: 0.93-0.97), demonstrating that CNN-based AI has good diagnostic value for early EC on endoscopic images.
CONCLUSIONS Based on our meta-analysis, CNN-based AI is an excellent diagnostic tool with high sensitivity, specificity, and AUC in the diagnosis of early EC based on endoscopic images.
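The likelihood ratios and DOR in such meta-analyses follow directly from pooled sensitivity and specificity (up to rounding: the published pooled values come from a bivariate model, so they differ slightly from what these point estimates imply). A quick illustration of the relationships:

```python
def likelihood_ratios(sens, spec):
    """Derive LR+, LR-, and the diagnostic odds ratio from sensitivity and specificity."""
    lr_pos = sens / (1 - spec)   # how much a positive test raises the odds of disease
    lr_neg = (1 - sens) / spec   # how much a negative test lowers them
    dor = lr_pos / lr_neg        # single summary number of discriminative power
    return lr_pos, lr_neg, dor

# Pooled point estimates from the meta-analysis above (SEN 0.90, SPE 0.91).
lr_pos, lr_neg, dor = likelihood_ratios(sens=0.90, spec=0.91)
print(round(lr_pos, 1), round(lr_neg, 2), round(dor))
```

The point estimates give LR+ ≈ 10 and LR- ≈ 0.11, consistent with the reported 9.8 and 0.11.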
Affiliation(s)
- Hongbiao Ma
  - Department of Thoracic Surgery, Chongqing General Hospital, No.118, Xingguang Avenue, Liangjiang New Area, Chongqing, China
- Longlun Wang
  - Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Yilin Chen
  - Department of Thoracic Surgery, Chongqing General Hospital, No.118, Xingguang Avenue, Liangjiang New Area, Chongqing, China
- Lu Tian
  - Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
  - Address for correspondence: Dr. Lu Tian, Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, 400014, China. E-mail:
|
40
|
Sierra-Jerez F, Ruiz J, Martinez F. A Non-Aligned Deep Representation to Enhance Standard Colonoscopy Observations from Vascular Narrow Band Polyp Patterns. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1671-1674. [PMID: 36085968 DOI: 10.1109/embc48229.2022.9871752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Colorectal cancer (CRC) was responsible for about one million deaths worldwide during 2020. Polyps are protruding masses, observed in routine colonoscopies, that constitute the main CRC biomarker. Nonetheless, one of the best alternatives for polyp malignancy classification is vascular pattern analysis, typically performed on specialized narrow-band images (NBI). Worse still, these patterns are characterized only from gastroenterologists' observations, introducing subjectivity and being prone to diagnostic errors, with misclassification rates ranging from 59.5% to 84.2%. This work introduces a non-aligned, bi-directional deep projection between optical colonoscopy (OC) and NBI sequences to recover enhanced OC sequences that integrate vascular patterns and allow better discrimination among adenomatous, hyperplastic, and serrated polyps. This self-supervised representation helps reduce misclassification in standard OC observations. The validation was performed on a total of 76 OC and 76 NBI sequences, achieving a gain of 22.34% w.r.t. descriptors computed from raw OC. Clinical relevance: a deep representation that enhances standard OC observations by associating vascularity with polyps to discriminate among adenomatous, hyperplastic, and serrated polyps.
|
41
|
Luo D, Kuang F, Du J, Zhou M, Liu X, Luo X, Tang Y, Li B, Su S. Artificial Intelligence-Assisted Endoscopic Diagnosis of Early Upper Gastrointestinal Cancer: A Systematic Review and Meta-Analysis. Front Oncol 2022; 12:855175. [PMID: 35756602 PMCID: PMC9229174 DOI: 10.3389/fonc.2022.855175] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/28/2022] [Indexed: 11/17/2022] Open
Abstract
Objective The aim of this study was to assess the diagnostic ability of artificial intelligence (AI) in the detection of early upper gastrointestinal cancer (EUGIC) using endoscopic images. Methods Databases were searched for studies on AI-assisted diagnosis of EUGIC using endoscopic images. The pooled area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) with 95% confidence interval (CI) were calculated. Results Overall, 34 studies were included in our final analysis. Among the 17 image-based studies investigating early esophageal cancer (EEC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.95 (95% CI, 0.95–0.96), 0.95 (95% CI, 0.94–0.95), 10.76 (95% CI, 7.33–15.79), 0.07 (95% CI, 0.04–0.11), and 173.93 (95% CI, 81.79–369.83), respectively. Among the seven patient-based studies investigating EEC detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.94 (95% CI, 0.91–0.96), 0.90 (95% CI, 0.88–0.92), 6.14 (95% CI, 2.06–18.30), 0.07 (95% CI, 0.04–0.11), and 69.13 (95% CI, 14.73–324.45), respectively. Among the 15 image-based studies investigating early gastric cancer (EGC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.94, 0.87 (95% CI, 0.87–0.88), 0.88 (95% CI, 0.87–0.88), 7.20 (95% CI, 4.32–12.00), 0.14 (95% CI, 0.09–0.23), and 48.77 (95% CI, 24.98–95.19), respectively. Conclusions On the basis of our meta-analysis, AI exhibited high accuracy in diagnosis of EUGIC. Systematic Review Registration https://www.crd.york.ac.uk/PROSPERO/, identifier PROSPERO (CRD42021270443).
Affiliation(s)
- De Luo
  - Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Fei Kuang
  - Department of General Surgery, Changhai Hospital of The Second Military Medical University, Shanghai, China
- Juan Du
  - Department of Clinical Medicine, Southwest Medical University, Luzhou, China
- Mengjia Zhou
  - Department of Ultrasound, Seventh People's Hospital of Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Xiangdong Liu
  - Department of Hepatobiliary Surgery, Zigong Fourth People's Hospital, Zigong, China
- Xinchen Luo
  - Department of Gastroenterology, Zigong Third People's Hospital, Zigong, China
- Yong Tang
  - School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Bo Li
  - Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Song Su
  - Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
|
42
|
Tang D, Ni M, Zheng C, Ding X, Zhang N, Yang T, Zhan Q, Fu Y, Liu W, Zhuang D, Lv Y, Xu G, Wang L, Zou X. A deep learning-based model improves diagnosis of early gastric cancer under narrow band imaging endoscopy. Surg Endosc 2022; 36:7800-7810. [PMID: 35641698 DOI: 10.1007/s00464-022-09319-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 04/27/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND Diagnosis of early gastric cancer (EGC) under narrow band imaging (NBI) endoscopy is dependent on expertise and skill. We aimed to elucidate whether artificial intelligence (AI) could diagnose EGC under NBI and to evaluate the diagnostic assistance provided by the AI system. METHODS In this retrospective diagnostic study, 21,785 NBI images and 20 videos from five centers were divided into a training dataset (13,151 images, 810 patients), an internal validation dataset (7,057 images, 283 patients), four external validation datasets (1,577 images, 147 patients), and a video validation dataset (20 videos, 20 patients). All the images were labeled manually and used to train an AI system based on You Only Look Once v3 (YOLOv3). Next, the diagnostic performance of the AI system and endoscopists were compared, and the diagnostic assistance of the AI system was assessed. Accuracy, sensitivity, specificity, and AUC were the primary outcomes. RESULTS The AI system diagnosed EGCs on the validation datasets with AUCs of 0.888-0.951 and detected all the EGCs (100.0%) in the video dataset. The AI system achieved better diagnostic performance (accuracy, 93.2%; 95% CI, 90.0-94.9%) than senior (85.9%; 95% CI, 84.2-87.4%) and junior (79.5%; 95% CI, 77.8-81.0%) endoscopists. The AI system significantly enhanced the performance of both senior (89.4%; 95% CI, 87.9-90.7%) and junior (84.9%; 95% CI, 83.4-86.3%) endoscopists. CONCLUSION The NBI AI system outperformed the endoscopists and showed potential as an assistive tool for EGC identification. Prospective validation is needed to evaluate the clinical benefit of the system in real clinical practice.
Affiliation(s)
- Dehua Tang
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Muhan Ni
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Chang Zheng
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Xiwei Ding
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Nina Zhang
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Tian Yang
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Qiang Zhan
  - Department of Gastroenterology, Wuxi People's Hospital, Affiliated Wuxi People's Hospital With Nanjing Medical University, Wuxi, 214023, Jiangsu, China
- Yiwei Fu
  - Department of Gastroenterology, Taizhou People's Hospital, The Fifth Affiliated Hospital With Nantong University, Taizhou, 225300, Jiangsu, China
- Wenjia Liu
  - Department of Gastroenterology, Changzhou Second People's Hospital, Changzhou, 213003, Jiangsu, China
- Duanming Zhuang
  - Department of Gastroenterology, Nanjing Gaochun People's Hospital, Nanjing, 211300, Jiangsu, China
- Ying Lv
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Guifang Xu
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Lei Wang
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Xiaoping Zou
  - Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
|
43
|
Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. Sensors (Basel) 2022; 22:4079. [PMID: 35684696 PMCID: PMC9185306 DOI: 10.3390/s22114079] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/21/2022] [Accepted: 05/24/2022] [Indexed: 05/27/2023]
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders, and lower GI tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance to patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. A single examination generates 5,000 frames, which require extensive analysis and take a long time to review. Artificial intelligence techniques, with their higher ability to diagnose and to assist physicians in making accurate diagnostic decisions, address these challenges. In this study, several multi-methodology systems were developed; the work was divided into four proposed systems, each with more than one diagnostic method. The first system uses artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), and the fuzzy color histogram (FCH). The second system uses the pre-trained CNN models GoogLeNet and AlexNet, based on the extraction of deep feature maps and their classification with high accuracy. The third system uses a hybrid technique consisting of two blocks: CNN models (GoogLeNet and AlexNet) to extract feature maps, followed by a support vector machine (SVM) to classify the deep feature maps. The fourth system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases.
All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
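To illustrate one of the hand-crafted descriptors named above: a GLCM tallies how often pairs of gray levels co-occur at a fixed spatial offset, and texture statistics such as contrast and energy are then read off the normalized matrix. A minimal pure-Python sketch on a toy 4-level image with a horizontal one-pixel offset (real pipelines use library implementations over 256 levels and several offsets):

```python
from collections import Counter

def glcm_features(img):
    """GLCM for offset (0, 1) on a 2D gray-level grid; returns contrast and energy."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pixel pairs
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    p = {ij: c / total for ij, c in pairs.items()}  # normalized co-occurrence matrix
    contrast = sum(prob * (i - j) ** 2 for (i, j), prob in p.items())
    energy = sum(prob ** 2 for prob in p.values())
    return contrast, energy

# Toy 4x4 image quantized to 4 gray levels.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
contrast, energy = glcm_features(img)
print(round(contrast, 4), round(energy, 4))  # uniform regions -> low contrast
```

The same statistics computed over patches of an endoscopic frame become entries in the feature vector fed to the ANN/FFNN classifiers.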
Affiliation(s)
- Suliman Mohamed Fati
  - College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ebrahim Mohammed Senan
  - Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
- Ahmad Taher Azar
  - College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
  - Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
|
44
|
Brand M, Troya J, Krenzer A, Saßmannshausen Z, Zoller WG, Meining A, Lux TJ, Hann A. Development and evaluation of a deep learning model to improve the usability of polyp detection systems during interventions. United European Gastroenterol J 2022; 10:477-484. [PMID: 35511456 PMCID: PMC9189459 DOI: 10.1002/ueg2.12235] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 03/31/2022] [Indexed: 12/16/2022] Open
Abstract
Background The efficiency of artificial intelligence as computer-aided detection (CADe) systems for colorectal polyps has been demonstrated in several randomized trials. However, CADe systems generate many distracting detections, especially during interventions such as polypectomies. Those distracting CADe detections are often induced by the introduction of snares or biopsy forceps, as the systems have not been trained for such situations. In addition, there is a significant number of non-false but not relevant detections of polyps that have already been detected previously. All these detections have the potential to disturb the examiner's work. Objectives To develop and evaluate a convolutional neural network that recognizes instruments in the endoscopic image, suppresses distracting CADe detections, and reliably detects endoscopic interventions. Methods A total of 580 different examination videos from 9 different centers using 4 different processor types were screened for instruments and represented the training dataset (519,856 images in total, of which 144,217 contained a visible instrument). The test dataset included 10 full-colonoscopy videos that were analyzed for the recognition of visible instruments and for detections by a commercially available CADe system (GI Genius, Medtronic). Results The test dataset contained 153,623 images, 8.84% of which showed visible instruments (12 interventions, 19 instruments used). The convolutional neural network reached an overall accuracy in the detection of visible instruments of 98.59%. Sensitivity and specificity were 98.55% and 98.92%, respectively. A mean of 462.8 frames containing distracting CADe detections per colonoscopy were avoided using the convolutional neural network, accounting for 95.6% of all distracting CADe detections. Conclusions Detection of endoscopic instruments in colonoscopy using artificial intelligence technology is reliable and achieves high sensitivity and specificity. Accordingly, the new convolutional neural network could be used to suppress distracting CADe detections during endoscopic procedures. Our study thus demonstrates the great potential of artificial intelligence technology beyond mucosal assessment.
Affiliation(s)
- Markus Brand
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
- Joel Troya
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
- Adrian Krenzer
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
  - Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
- Zita Saßmannshausen
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
- Wolfram G Zoller
  - Department of Internal Medicine and Gastroenterology, Katharinenhospital, Stuttgart, Germany
- Alexander Meining
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
- Thomas J Lux
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
- Alexander Hann
  - Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, Gastroenterology, University Hospital Würzburg, Würzburg, Germany
|
45
|
Kutsumi H. Contribution of the Japan Gastroenterological Endoscopy Society to promote computer-aided diagnosis/detection system development using artificial intelligence technology. Dig Endosc 2022; 34 Suppl 2:132-135. [PMID: 34652003 DOI: 10.1111/den.14146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Affiliation(s)
- Hiromu Kutsumi
- Center for Clinical Research and Advanced Medicine, Shiga University of Medical Science, Shiga, Japan
|
46
|
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and research in the field of gastrointestinal (GI) endoscopy is increasing. In particular, the advent of convolutional neural networks, a class of deep learning methods, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 on the esophagus, 30 on the stomach, 29 on CE, and 63 on the colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identification of the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small-bowel diseases is being investigated. AI in colonoscopy has advanced, with several patient-based prospective studies conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been reported recently. Most studies of AI in the field of GI endoscopy are still at the preclinical stage because of retrospective designs using still images; video-based prospective studies are needed to advance the field. Nevertheless, AI will continue to develop and will be used in daily clinical practice in the near future. In this review, we highlight the published literature and provide the current status of, and insights into the future of, AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa
  - Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
  - Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe
  - Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada
  - Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda
  - Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito
  - Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
|
47
|
Panarese A. Usefulness of artificial intelligence in early gastric cancer. Artif Intell Cancer 2022; 3:17-26. [DOI: 10.35713/aic.v3.i2.17] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 03/27/2022] [Accepted: 04/19/2022] [Indexed: 02/06/2023] Open
Abstract
Gastric cancer (GC) is a major cancer worldwide, with high mortality and morbidity. Endoscopy, important for the early detection of GC, requires trained skills, high-quality technology, and surveillance and screening programs. Early diagnosis allows a better prognosis through surgical or curative endoscopic therapy. Magnified endoscopy with virtual chromoendoscopy remarkably improves the detection of early gastric cancer (EGC) when endoscopy is performed by expert endoscopists. Artificial intelligence (AI) has also been introduced to GC diagnostics to increase diagnostic efficiency. AI improves the early detection of gastric lesions because it supports both non-expert and experienced endoscopists in defining the margins of the tumor and the depth of infiltration. AI increases the detection rate of EGC, reduces the rate of missed tumors, and characterizes EGCs, allowing clinicians to make the best therapeutic decision, that is, one that ensures curability. AI has evolved remarkably in medicine in recent years, moving from the research phase to clinical practice. In addition, the diagnosis of GC has markedly progressed. We predict that AI will allow great advances in the diagnosis and treatment of EGC by overcoming the variability in performance that is currently a limitation of chromoendoscopy.
Affiliation(s)
- Alba Panarese
- Department of Gastroenterology and Endoscopy, Central Hospital, Taranto 74123, Italy
|
48
|
Yu T, Lin N, Zhong X, Zhang X, Zhang X, Chen Y, Liu J, Hu W, Duan H, Si J. Multi-label recognition of cancer-related lesions with clinical priors on white-light endoscopy. Comput Biol Med 2022; 143:105255. [PMID: 35151153 DOI: 10.1016/j.compbiomed.2022.105255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 01/07/2022] [Accepted: 01/20/2022] [Indexed: 11/28/2022]
Abstract
Deep learning-based computer-aided diagnosis techniques have demonstrated encouraging performance in endoscopic lesion identification and detection, and have reduced the rate of missed and false detections of disease during endoscopy. However, the interpretability of model-based results has not been adequately addressed by existing methods. This is directly manifested as a significant bias in feature localization: even models with good recognition performance suffer severe feature localization errors, particularly for lesions with subtle morphological features, and such unsatisfactory performance hinders clinical deployment. To alleviate this problem, we propose a solution that optimizes the localization bias in the feature representations of recognition models for cancer-related lesions that are difficult to accurately label and identify in clinical practice. Optimization is performed during the training phase through a proposed data augmentation method and an auxiliary loss function based on clinical priors. The data augmentation method, called partial jigsaw, "breaks" the spatial structure of lesion-independent image blocks and enriches the data feature space, decoupling the interference of background features and focusing the model on fine-grained lesion features. The annotation-based auxiliary loss function uses class activation maps for sample distribution correction and leads the model's localization representation to converge on the gold-standard annotations of the visualization maps. The results show that with our method the precision of model recognition reached an average of 92.79%, with an F1-score of 92.61% and an accuracy of 95.56%, on a dataset constructed from 23 hospitals. In addition, we quantified the evaluation of the visualization feature maps: the improved model yielded significant offset-correction results compared with the baseline model.
The average visualization-weighted positive coverage improved from 51.85% to 83.76%. The proposed approach did not change the deployment capability and inference speed of the original model and can be incorporated into any state-of-the-art neural network. It also shows the potential to provide more accurate localization inference results and assist in clinical examinations during endoscopies.
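The "partial jigsaw" augmentation named in this abstract can be sketched roughly as follows. This is a hypothetical reconstruction, not code from the paper: the tile-grid size, the lesion-overlap rule, and the function name `partial_jigsaw` are all assumptions made for illustration. The idea is to shuffle only the image tiles that do not overlap the annotated lesion, so background spatial structure is "broken" while lesion-bearing tiles stay fixed.

```python
import numpy as np

def partial_jigsaw(image, lesion_mask, grid=4, rng=None):
    """Shuffle only lesion-independent tiles of `image`.

    Hypothetical sketch of the 'partial jigsaw' augmentation: the
    image is cut into a `grid` x `grid` layout of tiles, tiles that
    overlap the boolean `lesion_mask` are left in place, and the
    remaining (background) tiles are randomly permuted among their
    original positions.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid

    tiles, coords = [], []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw))
            if not lesion_mask[sl].any():        # lesion-independent tile
                tiles.append(image[sl].copy())
                coords.append(sl)

    # "Break" the background spatial structure by permuting those tiles.
    order = rng.permutation(len(tiles))
    out = image.copy()
    for sl, k in zip(coords, order):
        out[sl] = tiles[k]
    return out
```

Because lesion tiles are never moved, the fine-grained lesion features (and their annotations) remain valid after augmentation, which matches the abstract's stated goal of decoupling background interference from lesion features.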
Affiliation(s)
- Tao Yu: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Ne Lin: Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
- Xingwei Zhong: Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
- Xiaoyan Zhang: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Xinsen Zhang: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Yihe Chen: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Jiquan Liu: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Weiling Hu: Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China; Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Huilong Duan: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Jianmin Si: Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China; Institute of Gastroenterology, Zhejiang University, Hangzhou, China
49
Abe S, Tomizawa Y, Saito Y. Can artificial intelligence be your angel to diagnose early gastric cancer in real clinical practice? Gastrointest Endosc 2022; 95:679-681. [PMID: 35177258] [DOI: 10.1016/j.gie.2021.12.042]
Affiliation(s)
- Seiichiro Abe: Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Yutaka Tomizawa: Division of Gastroenterology, Harborview Medical Center, University of Washington, Seattle, Washington, USA
- Yutaka Saito: Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
50
Sharma P, Hassan C. Artificial Intelligence and Deep Learning for Upper Gastrointestinal Neoplasia. Gastroenterology 2022; 162:1056-1066. [PMID: 34902362] [DOI: 10.1053/j.gastro.2021.11.040]
Abstract
Upper gastrointestinal (GI) neoplasms account for 35% of GI cancers and 1.5 million cancer-related deaths every year. Despite its efficacy in preventing cancer mortality, diagnostic upper GI endoscopy is affected by a substantial miss rate for neoplastic lesions, due to failure to recognize a visible lesion or imperfect navigation. This may be offset by the real-time application of artificial intelligence (AI) for detection (computer-aided detection [CADe]) and characterization (computer-aided diagnosis [CADx]) of upper GI neoplasia. Stand-alone performance of CADe for esophageal squamous cell neoplasia, Barrett's esophagus-related neoplasia, and gastric cancer has shown promising accuracy, with sensitivity ranging between 83% and 93%. However, incorporation of CADe/CADx into clinical practice depends on several factors, such as possible bias in the training or validation phases of these algorithms, their interaction with human endoscopists, and the clinical implications of false-positive results. The aim of this review is to guide the clinician across the multiple steps of AI development in clinical practice.
Affiliation(s)
- Prateek Sharma: University of Kansas School of Medicine, Kansas City, Missouri; Kansas City Veterans Affairs Medical Center, Kansas City, Missouri
- Cesare Hassan: Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; Humanitas Clinical and Research Center-IRCCS, Endoscopy Unit, Rozzano, Italy