1. Khosravi M, Jasemi SK, Hayati P, Javar HA, Izadi S, Izadi Z. Transformative artificial intelligence in gastric cancer: Advancements in diagnostic techniques. Comput Biol Med 2024;183:109261. [PMID: 39488054] [DOI: 10.1016/j.compbiomed.2024.109261]
Abstract
Gastric cancer represents a significant global health challenge with elevated incidence and mortality rates, highlighting the need for advancements in diagnostic and therapeutic strategies. This review paper addresses the critical need for a thorough synthesis of the role of artificial intelligence (AI) in the management of gastric cancer. It provides an in-depth analysis of current AI applications, focusing on their contributions to early diagnosis, treatment planning, and outcome prediction. The review identifies key gaps and limitations in the existing literature by examining recent studies and technological developments. It aims to clarify the evolution of AI-driven methods and their impact on enhancing diagnostic accuracy, personalizing treatment strategies, and improving patient outcomes. The paper emphasizes the transformative potential of AI in overcoming the challenges associated with gastric cancer management and proposes future research directions to further harness AI's capabilities. Through this synthesis, the review underscores the importance of integrating AI technologies into clinical practice to revolutionize gastric cancer management.
Affiliation(s)
- Mobina Khosravi
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Seyedeh Kimia Jasemi
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Parsa Hayati
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Hamid Akbari Javar
- Department of Pharmaceutics, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Saadat Izadi
- Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Zhila Izadi
- Pharmaceutical Sciences Research Center, Health Institute, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
2. Kikuchi R, Okamoto K, Ozawa T, Shibata J, Ishihara S, Tada T. Endoscopic Artificial Intelligence for Image Analysis in Gastrointestinal Neoplasms. Digestion 2024;105:419-435. [PMID: 39068926] [DOI: 10.1159/000540251]
Abstract
BACKGROUND Artificial intelligence (AI) using deep learning systems has recently been utilized in various medical fields. In the field of gastroenterology, AI is primarily implemented in image recognition and utilized in the realm of gastrointestinal (GI) endoscopy. In GI endoscopy, computer-aided detection/diagnosis (CAD) systems assist endoscopists in GI neoplasm detection or differentiation of cancerous or noncancerous lesions. Several AI systems for colorectal polyps have already been applied in colonoscopy clinical practices. In esophagogastroduodenoscopy, a few CAD systems for upper GI neoplasms have been launched in Asian countries. The usefulness of these CAD systems in GI endoscopy has been gradually elucidated. SUMMARY In this review, we outline recent articles on several studies of endoscopic AI systems for GI neoplasms, focusing on esophageal squamous cell carcinoma (ESCC), esophageal adenocarcinoma (EAC), gastric cancer (GC), and colorectal polyps. In ESCC and EAC, computer-aided detection (CADe) systems were mainly developed, and a recent meta-analysis study showed sensitivities of 91.2% and 93.1% and specificities of 80% and 86.9%, respectively. In GC, a recent meta-analysis study on CADe systems demonstrated that their sensitivity and specificity were as high as 90%. A randomized controlled trial (RCT) also showed that the use of the CADe system reduced the miss rate. Regarding computer-aided diagnosis (CADx) systems for GC, although RCTs have not yet been conducted, most studies have demonstrated expert-level performance. In colorectal polyps, multiple RCTs have shown the usefulness of the CADe system for improving the polyp detection rate, and several CADx systems have been shown to have high accuracy in colorectal polyp differentiation. KEY MESSAGES Most analyses of endoscopic AI systems suggested that their performance was better than that of nonexpert endoscopists and equivalent to that of expert endoscopists. 
Thus, endoscopic AI systems may be useful for reducing the risk of overlooking lesions and improving the diagnostic ability of endoscopists.
Affiliation(s)
- Ryosuke Kikuchi
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Kazuaki Okamoto
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tsuyoshi Ozawa
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
- Junichi Shibata
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
- Soichiro Ishihara
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
3. Xu J, Kuai Y, Chen Q, Wang X, Zhao Y, Sun B. Spatio-Temporal Feature Transformation Based Polyp Recognition for Automatic Detection: Higher Accuracy than Novice Endoscopists in Colorectal Polyp Detection and Diagnosis. Dig Dis Sci 2024;69:911-921. [PMID: 38244123] [PMCID: PMC10960915] [DOI: 10.1007/s10620-024-08277-0]
Abstract
BACKGROUND Artificial intelligence represents an emerging area with promising potential for improving colonoscopy quality. AIMS To develop a colon polyp detection model using spatio-temporal feature transformation (STFT) and evaluate its performance through a randomized sample experiment. METHODS Colonoscopy videos from the Digestive Endoscopy Center of the First Affiliated Hospital of Anhui Medical University, recorded between January 2018 and November 2022, were selected and divided into two datasets. To verify the model's practical application in clinical settings, 1500 colonoscopy images and 1200 polyp images of various sizes were randomly selected from the test set, and the recognition results of the STFT model were compared with those of endoscopists with different years of experience. RESULTS In the randomized sample trial involving 1500 colonoscopy images, the STFT model demonstrated significantly higher accuracy and specificity than endoscopists with low years of experience (0.902 vs. 0.809 and 0.898 vs. 0.826, respectively). Moreover, the model's sensitivity was 0.904, higher than that of endoscopists with low, medium, or high years of experience (0.80, 0.896, and 0.895, respectively), with statistical significance (P < 0.05). In the randomized sample experiment of 1200 polyp images of different sizes, the accuracy of the STFT model was significantly higher than that of endoscopists with low years of experience for polyps ≤ 0.5 cm and 0.6-1.0 cm in size (0.902 vs. 0.70 and 0.953 vs. 0.865, respectively). CONCLUSIONS The STFT-based colon polyp detection model exhibits high accuracy in detecting polyps in colonoscopy videos and is particularly efficient at detecting small polyps (≤ 0.5 cm) (0.902 vs. 0.70, P < 0.001).
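The accuracy, sensitivity, and specificity figures quoted in this and the other abstracts are standard confusion-matrix metrics. As a reminder of their definitions (an illustrative helper, not any study's actual evaluation code; the function name and the counts are hypothetical):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary detection task."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction of all calls that are correct
    sensitivity = tp / (tp + fn)                 # recall on images that truly contain a lesion
    specificity = tn / (tn + fp)                 # recall on lesion-free images
    return accuracy, sensitivity, specificity

# e.g. 90 true positives, 10 false positives, 90 true negatives, 10 false negatives
acc, sens, spec = detection_metrics(90, 10, 90, 10)
```

Comparisons such as "0.902 vs. 0.809" above are simply these quantities computed separately for the model and for each endoscopist group.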
Affiliation(s)
- Jianhua Xu
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Yaxian Kuai
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Qianqian Chen
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Xu Wang
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Anhui Provincial Key Laboratory of Digestive Disease, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Yihang Zhao
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Bin Sun
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Anhui Provincial Key Laboratory of Digestive Disease, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Jixi Road 218, Hefei, Anhui, 230022, China
4. Klang E, Sourosh A, Nadkarni GN, Sharif K, Lahat A. Deep Learning and Gastric Cancer: Systematic Review of AI-Assisted Endoscopy. Diagnostics (Basel) 2023;13:3613. [PMID: 38132197] [PMCID: PMC10742887] [DOI: 10.3390/diagnostics13243613]
Abstract
BACKGROUND Gastric cancer (GC), a significant health burden worldwide, is typically diagnosed in the advanced stages due to its non-specific symptoms and complex morphological features. Deep learning (DL) has shown potential for improving and standardizing early GC detection. This systematic review aims to evaluate the current status of DL in pre-malignant, early-stage, and gastric neoplasia analysis. METHODS A comprehensive literature search was conducted in PubMed/MEDLINE for original studies implementing DL algorithms for gastric neoplasia detection using endoscopic images. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was on studies providing quantitative diagnostic performance measures and those comparing AI performance with human endoscopists. RESULTS Our review encompasses 42 studies that utilize a variety of DL techniques. The findings demonstrate the utility of DL in GC classification, detection, tumor invasion depth assessment, cancer margin delineation, lesion segmentation, and detection of early-stage and pre-malignant lesions. Notably, DL models frequently matched or outperformed human endoscopists in diagnostic accuracy. However, heterogeneity in DL algorithms, imaging techniques, and study designs precluded a definitive conclusion about the best algorithmic approach. CONCLUSIONS The promise of artificial intelligence in improving and standardizing gastric neoplasia detection, diagnosis, and segmentation is significant. This review is limited by predominantly single-center studies and undisclosed datasets used in AI training, impacting generalizability and demographic representation. Further, retrospective algorithm training may not reflect actual clinical performance, and a lack of model details hinders replication efforts. 
More research is needed to substantiate these findings, including larger-scale multi-center studies, prospective clinical trials, and comprehensive technical reporting of DL algorithms and datasets, particularly regarding the heterogeneity in DL algorithms and study designs.
Affiliation(s)
- Eyal Klang
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- ARC Innovation Center, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Ali Sourosh
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Girish N. Nadkarni
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Kassem Sharif
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Adi Lahat
- Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
5. Su X, Liu Q, Gao X, Ma L. Evaluation of deep learning methods for early gastric cancer detection using gastroscopic images. Technol Health Care 2023;31:313-322. [PMID: 37066932] [DOI: 10.3233/thc-236027]
Abstract
BACKGROUND A timely diagnosis of early gastric cancer (EGC) can greatly reduce the death rate of patients. However, manual detection of EGC is a costly and low-accuracy task. Artificial intelligence (AI) methods based on deep learning are considered a potential way to detect EGC and have outperformed endoscopists in EGC detection, especially the recently reported region-based convolutional neural network (RCNN) models. However, no studies have compared the performances of different RCNN-series models. OBJECTIVE This study aimed to compare the performances of different RCNN-series models for EGC. METHODS Three typical RCNN models, Faster RCNN, Cascade RCNN, and Mask RCNN, were used to detect gastric cancer in 3659 gastroscopic images, including 1434 images of EGC. RESULTS The models were evaluated in terms of specificity, accuracy, precision, recall, and average precision (AP). Faster RCNN, Cascade RCNN, and Mask RCNN had similar accuracy (0.935, 0.938, and 0.935). The specificity of Cascade RCNN was 0.946, slightly higher than the 0.908 of Faster RCNN and the 0.908 of Mask RCNN. CONCLUSION Faster RCNN and Mask RCNN place more emphasis on positive detection, whereas Cascade RCNN places more emphasis on negative detection. These deep learning-based methods can help in early cancer diagnosis using endoscopic images.
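The AP (average precision) metric used above to rank detection models can be illustrated with a minimal all-points computation. This is a simplified sketch, not the study's code: it assumes each detection has already been matched to ground truth (label 1 = true positive, 0 = false positive), whereas real object-detection AP also involves IoU matching thresholds, omitted here:

```python
def average_precision(scores, labels):
    """All-points average precision: mean of precision at each true-positive rank.

    scores: confidence of each detection; labels: 1 if the detection is a
    true positive, 0 if a false positive (matching assumed done beforehand).
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])  # rank by confidence
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / sum(labels)

ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1])  # hits at ranks 1 and 3
```

A model that concentrates its false positives at low confidence (as a cascade of refinement stages tends to) loses little AP, which is one way a metric like this separates the RCNN variants compared above.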
Affiliation(s)
- Xiufeng Su
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
- Qingshan Liu
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
- Xiaozhong Gao
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
- Liyong Ma
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
6. Choi SJ, Kim DK, Kim BS, Cho M, Jeong J, Jo YH, Song KJ, Kim YJ, Kim S. Mask R-CNN based multiclass segmentation model for endotracheal intubation using video laryngoscope. Digit Health 2023;9:20552076231211547. [PMID: 38025115] [PMCID: PMC10631336] [DOI: 10.1177/20552076231211547]
Abstract
Objective Endotracheal intubation (ETI) is critical for securing the airway in emergent situations. Although artificial intelligence algorithms are frequently used to analyze medical images, their application to evaluating intraoral structures from images captured during emergent ETI remains limited. The aim of this study was to develop an artificial intelligence model for segmenting structures in the oral cavity using video laryngoscope (VL) images. Methods From 54 VL videos, clinicians manually labeled images that include motion blur, foggy vision, blood, mucus, and vomitus. Anatomical structures of interest included the tongue, epiglottis, vocal cord, and corniculate cartilage. EfficientNet-B5 with DeepLabv3+, EfficientNet-B5 with U-Net, and a configured Mask R-CNN (region-based convolutional neural network) were used; EfficientNet-B5 was pretrained on ImageNet. The Dice similarity coefficient (DSC) was used to measure the segmentation performance of each model. Accuracy, recall, specificity, and F1 score, derived from the intersection over union between the ground-truth and predicted masks, were used to evaluate how well each model targeted the structures. Results The DSCs for the tongue, epiglottis, vocal cord, and corniculate cartilage obtained with EfficientNet-B5 with DeepLabv3+, EfficientNet-B5 with U-Net, and the configured Mask R-CNN were 0.3351/0.7675/0.766/0.6539, 0.0/0.7581/0.7395/0.6906, and 0.1167/0.7677/0.7207/0.57, respectively. Furthermore, the processing speeds (frames per second) of the three models were 3, 24, and 32, respectively. Conclusions The algorithm developed in this study can assist medical providers performing ETI in emergent situations.
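The Dice similarity coefficient and intersection over union used above are closely related overlap measures for segmentation masks. A minimal sketch for binary masks (illustrative only; flat 0/1 Python lists stand in for image masks, and the function name is hypothetical):

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and IoU for two binary masks (0/1 values)."""
    inter = sum(p & t for p, t in zip(pred, truth))   # pixels marked in both masks
    union = sum(p | t for p, t in zip(pred, truth))   # pixels marked in either mask
    dice = 2 * inter / (sum(pred) + sum(truth))
    iou = inter / union
    return dice, iou

dice, iou = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])  # masks overlap on one pixel
```

The two scores are monotonically related (DSC = 2·IoU / (1 + IoU)), so they rank models identically; DSC simply weights the overlap more generously.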
Affiliation(s)
- Seung Jae Choi
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, Republic of Korea
- Dae Kon Kim
- Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Republic of Korea
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Republic of Korea
- Minwoo Cho
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, Republic of Korea
- Joo Jeong
- Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- You Hwan Jo
- Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Kyoung Jun Song
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Emergency Medicine, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Yu Jin Kim
- Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Republic of Korea
- Institute of Bioengineering, Seoul National University, Seoul, Republic of Korea