1. Gao X, Lin J, Qu C, Wang C, Wu A, Zhu J, Xu C. Computer-aided diagnostic system with automated deep learning method based on the AutoGluon framework improved the diagnostic accuracy of early esophageal cancer. J Gastrointest Oncol 2024;15:535-543. [PMID: 38756633] [PMCID: PMC11094492] [DOI: 10.21037/jgo-24-158]
Abstract
Background There have been studies on the application of computer-aided diagnosis (CAD) in the endoscopic diagnosis of early esophageal cancer (EEC), but a significant gap to clinical application remains. We developed an endoscopic CAD system for EEC based on the AutoGluon framework, aiming to explore the feasibility of automated deep learning (DL) in clinical application. Methods Endoscopic pictures of normal esophagus, esophagitis, and EEC were collected from The First Affiliated Hospital of Soochow University (September 2015 to December 2021) and the Norwegian HyperKvasir database. All images of non-cancerous esophageal lesions and EEC in this study were pathologically examined. There were three tasks: task A was normal vs. lesion classification under non-magnifying endoscopy (n=932 vs. 1,092); task B was non-cancer lesion vs. EEC classification under non-magnifying endoscopy (n=594 vs. 429); and task C was non-cancer lesion vs. EEC classification under magnifying endoscopy (n=505 vs. 824). In each classification task, 100 pictures formed the verification set and the rest comprised the training set. The CAD system was built on the AutoGluon framework. The diagnostic performance of the model was compared with that of endoscopists grouped by years of experience (senior, >15 years; junior, <5 years). Model evaluation indicators included accuracy, recall rate, precision, F1 value, interpretation time, and the area under the receiver operating characteristic (ROC) curve (AUC). Results In tasks A and B, the accuracies of the medium-performance and high-performance CAD models were lower than those of the junior and senior doctors. In task C, the medium-performance and high-performance CAD accuracies were close to those of the junior and senior doctors. In sensitivity, the high-performance CAD model outperformed the junior doctors in both task A (0.850 vs. 0.830) and task C (0.840 vs. 0.830), although a large gap between the high-performance CAD models and the doctors remained. In task A, with the aid of CAD pre-interpretation, the accuracies of junior and senior physicians improved significantly (from 0.880 to 0.915 and from 0.920 to 0.945, respectively), and interpretation time was significantly shortened (junior: from 11.3 to 8.7 s; senior: from 6.7 to 5.5 s). In task C, with the aid of CAD pre-interpretation, the accuracies of junior and senior physicians improved significantly (from 0.850 to 0.865 and from 0.915 to 0.935, respectively), and interpretation time was significantly shortened (junior: from 9.5 to 7.7 s; senior: from 5.6 to 3.0 s). Conclusions The CAD system based on the AutoGluon framework can help doctors improve diagnostic accuracy and shorten interpretation time for EEC under endoscopy. This study suggests that automated DL methods are promising for clinical application.
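The evaluation indicators named in the Methods (accuracy, recall rate, precision, F1 value) are all derived from a binary confusion matrix. As a minimal illustration (the counts below are invented for a hypothetical 100-image verification set, not taken from the study):

```python
# Hypothetical sketch: computing the abstract's evaluation indicators
# from binary confusion-matrix counts. All numbers are illustrative.

def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, recall (sensitivity), precision, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    recall = tp / (tp + fn)          # "recall rate" / sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}

# Example: an invented verification set of 100 images
m = binary_metrics(tp=42, fp=7, fn=8, tn=43)
print(m)
```

The same four numbers are what the abstract reports when comparing the CAD models against the junior and senior endoscopists.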
Affiliation(s)
- Xin Gao, Jiaxi Lin, Chao Wang, Airong Wu, Jinzhou Zhu, and Chunfang Xu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Changju Qu: Department of Hematology, The First Affiliated Hospital of Soochow University, Suzhou, China
2. Chempak Kumar A, Mubarak DMN. Ensembled CNN with artificial bee colony optimization method for esophageal cancer stage classification using SVM classifier. Journal of X-Ray Science and Technology 2024;32:31-51. [PMID: 37980593] [DOI: 10.3233/xst-230111]
Abstract
BACKGROUND Esophageal cancer (EC) is an aggressive cancer with a high fatality rate and a rapidly rising incidence globally. However, early diagnosis of EC remains a challenging task for clinicians. OBJECTIVE To help address this challenge, this study aims to develop and test a new computer-aided diagnosis (CAD) network that combines several machine learning models and optimization methods to detect EC and classify cancer stages. METHODS The study develops a new deep learning network for classifying the various stages of EC and the premalignant stage, Barrett's esophagus, from endoscopic images. The proposed model uses a multi-convolutional neural network (CNN) model combining Xception, MobileNetV2, GoogLeNet, and Darknet53 for feature extraction. The extracted features are blended and then passed to a wrapper-based Artificial Bee Colony (ABC) optimization technique to select the most accurate and relevant attributes. A multi-class support vector machine (SVM) classifies the selected feature set into the various stages. A study dataset of 523 Barrett's esophagus images, 217 ESCC images, and 288 EAC images is used to train the proposed network and test its classification performance. RESULTS The proposed network combining Xception, MobileNetV2, GoogLeNet, and Darknet53 outperforms all existing methods, with an overall classification accuracy of 97.76% under 3-fold cross-validation. CONCLUSION This study demonstrates that a new deep learning network combining a multi-CNN model with ABC and a multi-class SVM is more efficient for EC analysis and stage classification than individual pre-trained networks.
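The key idea in the paper's pipeline is *wrapper-based* feature selection: candidate feature subsets are scored by the accuracy of a classifier trained on them. The sketch below illustrates only that wrapper idea on synthetic data; greedy forward selection and a toy nearest-centroid scorer stand in for the paper's ABC search and SVM, which are considerably more elaborate.

```python
# Simplified illustration of wrapper-based feature selection.
# Greedy forward selection + nearest-centroid scoring replace the paper's
# Artificial Bee Colony search and SVM; the data are synthetic.

def centroid_accuracy(X, y, feats):
    """Training-set accuracy of a nearest-centroid classifier on a feature subset."""
    classes = sorted(set(y))
    cent = {c: [sum(x[f] for x, yy in zip(X, y) if yy == c) /
                sum(1 for yy in y if yy == c) for f in feats]
            for c in classes}
    correct = 0
    for x, yy in zip(X, y):
        pred = min(classes, key=lambda c: sum(
            (x[f] - cf) ** 2 for f, cf in zip(feats, cent[c])))
        correct += pred == yy
    return correct / len(y)

def greedy_wrapper(X, y, n_feats, k):
    """Forward-select k features that maximize the wrapper score."""
    chosen = []
    while len(chosen) < k:
        best = max((f for f in range(n_feats) if f not in chosen),
                   key=lambda f: centroid_accuracy(X, y, chosen + [f]))
        chosen.append(best)
    return chosen

# Synthetic 3-feature data: only feature 0 separates the two classes.
X = [[0.0, 5.0, 1.0], [0.1, 4.0, 1.2], [1.0, 5.1, 0.9], [1.1, 4.2, 1.1]]
y = [0, 0, 1, 1]
print(greedy_wrapper(X, y, n_feats=3, k=1))  # → [0]
```

The ABC algorithm explores subsets with a population of "bee" solutions rather than greedily, but the scoring principle, rating a subset by downstream classifier performance, is the same.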
Affiliation(s)
- A Chempak Kumar: Department of Computer Science, University of Kerala, Trivandrum, Kerala, India
3. Zhang JQ, Mi JJ, Wang R. Application of convolutional neural network-based endoscopic imaging in esophageal cancer or high-grade dysplasia: A systematic review and meta-analysis. World J Gastrointest Oncol 2023;15:1998-2016. [DOI: 10.4251/wjgo.v15.i11.1998]
Abstract
BACKGROUND Esophageal cancer is the seventh-most common cancer type worldwide, accounting for 5% of deaths from malignancy. The development of novel diagnostic techniques has facilitated screening, early detection, and improved prognosis. Convolutional neural network (CNN)-based image analysis holds great promise for diagnosing and determining the prognosis of esophageal cancer, enabling even early detection of dysplasia.
AIM To conduct a meta-analysis of the diagnostic accuracy of CNN models for the diagnosis of esophageal cancer and high-grade dysplasia (HGD).
METHODS PubMed, EMBASE, Web of Science and Cochrane Library databases were searched for articles published up to November 30, 2022. We evaluated the diagnostic accuracy of using the CNN model with still image-based analysis and with video-based analysis for esophageal cancer or HGD, as well as for the invasion depth of esophageal cancer. The pooled sensitivity, pooled specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and area under the curve (AUC) were estimated, together with the 95% confidence intervals (CI). A bivariate method and hierarchical summary receiver operating characteristic method were used to calculate the diagnostic test accuracy of the CNN model. Meta-regression and subgroup analyses were used to identify sources of heterogeneity.
RESULTS A total of 28 studies were included in this systematic review and meta-analysis. Using still image-based analysis for the diagnosis of esophageal cancer or HGD provided a pooled sensitivity of 0.95 (95%CI: 0.92-0.97), pooled specificity of 0.92 (0.89-0.94), PLR of 11.5 (8.3-16.0), NLR of 0.06 (0.04-0.09), DOR of 205 (115-365), and AUC of 0.98 (0.96-0.99). When video-based analysis was used, a pooled sensitivity of 0.85 (0.77-0.91), pooled specificity of 0.73 (0.59-0.83), PLR of 3.1 (1.9-5.0), NLR of 0.20 (0.12-0.34), DOR of 15 (6-38) and AUC of 0.87 (0.84-0.90) were found. Prediction of invasion depth resulted in a pooled sensitivity of 0.90 (0.87-0.92), pooled specificity of 0.83 (95%CI: 0.76-0.88), PLR of 7.8 (1.9-32.0), NLR of 0.10 (0.41-0.25), DOR of 118 (11-1305), and AUC of 0.95 (0.92-0.96).
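The pooled indices above are linked by standard identities: PLR = sensitivity/(1 − specificity), NLR = (1 − sensitivity)/specificity, and DOR = PLR/NLR. A quick check against the still-image estimates is sketched below; note that bivariate pooled summaries need not satisfy these identities exactly, which is why the computed values differ slightly from the reported PLR 11.5, NLR 0.06, and DOR 205.

```python
# Consistency check (illustrative): likelihood ratios and the diagnostic
# odds ratio follow from sensitivity and specificity by standard identities.

def likelihood_ratios(sens: float, spec: float):
    plr = sens / (1 - spec)   # positive likelihood ratio
    nlr = (1 - sens) / spec   # negative likelihood ratio
    dor = plr / nlr           # diagnostic odds ratio
    return plr, nlr, dor

# Still-image pooled estimates from the abstract: sens 0.95, spec 0.92
plr, nlr, dor = likelihood_ratios(0.95, 0.92)
print(round(plr, 1), round(nlr, 2), dor)
```

The same identities explain why the video-based analysis, with its lower pooled specificity of 0.73, yields a far smaller PLR and DOR.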
CONCLUSION CNN-based image analysis in diagnosing esophageal cancer and HGD is an excellent diagnostic method with high sensitivity and specificity that merits further investigation in large, multicenter clinical trials.
Affiliation(s)
- Jun-Qi Zhang: The Fifth Clinical Medical College, Shanxi Medical University, Taiyuan 030001, Shanxi Province, China
- Jun-Jie Mi: Department of Gastroenterology, Shanxi Provincial People's Hospital, Taiyuan 030012, Shanxi Province, China
- Rong Wang: Department of Gastroenterology, The Fifth Hospital of Shanxi Medical University (Shanxi Provincial People's Hospital), Taiyuan 030012, Shanxi Province, China
4. Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023;10:1239. [PMID: 38002363] [PMCID: PMC10669008] [DOI: 10.3390/bioengineering10111239]
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
Affiliation(s)
- Ruichen Cui, Lei Wang, Lin Lin, Jie Li, Runda Lu, Shixiang Liu, Bowei Liu, Yimin Gu, Hanlu Zhang, Qixin Shang, Longqi Chen, and Dong Tian: Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lei Wang, Lin Lin, and Jie Li: also West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
5. Mohan A, Asghar Z, Abid R, Subedi R, Kumari K, Kumar S, Majumder K, Bhurgri AI, Tejwaney U, Kumar S. Revolutionizing healthcare by use of artificial intelligence in esophageal carcinoma - a narrative review. Ann Med Surg (Lond) 2023;85:4920-4927. [PMID: 37811030] [PMCID: PMC10553069] [DOI: 10.1097/ms9.0000000000001175]
Abstract
Esophageal cancer is a major cause of cancer-related mortality worldwide, with significant regional disparities. Early detection of precursor lesions is essential to improve patient outcomes. Artificial intelligence (AI) techniques, including deep learning and machine learning, have proven helpful to both gastroenterologists and pathologists in diagnosing and characterizing upper gastrointestinal malignancies by correlating findings with histopathology. The primary diagnostic method in gastroenterology is white-light endoscopic evaluation, but conventional endoscopy is partially inefficient at detecting esophageal cancer. Other endoscopic modalities, such as narrow-band imaging, endocytoscopy, and endomicroscopy, have shown improved visualization of mucosal structures and vasculature, providing baseline data from which to develop efficient AI-assisted predictive models for rapid interpretation. The main challenges in managing esophageal cancer are identifying high-risk patients and the disease's poor prognosis. AI techniques can therefore play a vital role in improving the early detection and diagnosis of precursor lesions, assisting gastroenterologists in performing targeted biopsies and making real-time decisions on endoscopic mucosal resection or endoscopic submucosal dissection. Combining AI techniques with endoscopic modalities can enhance the diagnosis and management of esophageal cancer, improving patient outcomes and reducing cancer-related mortality rates. The aim of this review is to gain a better understanding of the application of AI in the diagnosis, treatment, and prognosis of esophageal cancer, and of how computer-aided diagnosis and computer-aided detection can serve as vital tools for clinicians in the long run.
Affiliation(s)
- Rabia Abid: Liaquat College of Medicine and Dentistry
- Rasish Subedi: Universal College of Medical Sciences, Siddharthanagar, Nepal
- Aqsa I. Bhurgri: Shaheed Muhtarma Benazir Bhutto Medical University, Larkana, Pakistan
- Sarwan Kumar: Department of Medicine, Chittagong Medical College, Chittagong, Bangladesh; Wayne State University, Michigan, USA
6. Hosseini F, Asadi F, Emami H, Ebnali M. Machine learning applications for early detection of esophageal cancer: a systematic review. BMC Med Inform Decis Mak 2023;23:124. [PMID: 37460991] [DOI: 10.1186/s12911-023-02235-y]
Abstract
INTRODUCTION Esophageal cancer (EC) is a significant global health problem, ranking an estimated 7th in incidence and 6th in mortality. Timely diagnosis and treatment are critical for improving patient outcomes, as over 40% of patients with EC are diagnosed after metastasis. Recent advances in machine learning (ML) techniques, particularly in computer vision, have demonstrated promising applications in medical image processing, assisting clinicians in making more accurate and faster diagnostic decisions. Given the significance of early detection of EC, this systematic review aims to summarize and discuss the current state of research on ML-based methods for the early detection of EC. METHODS We conducted a comprehensive systematic search of five databases (PubMed, Scopus, Web of Science, Wiley, and IEEE) using search terms such as "ML", "Deep Learning (DL)", "Neural Networks (NN)", "Esophagus", "EC", and "Early Detection". After applying inclusion and exclusion criteria, 31 articles were retained for full review. RESULTS The results of this review highlight the potential of ML-based methods in the early detection of EC. The average accuracy of the reviewed methods in the analysis of endoscopic and computed tomography (CT) images of the esophagus was over 89%, indicating a high impact on early detection of EC. White light imaging (WLI) accounted for the largest share of clinical images used for ML-based early detection of EC. Among all ML techniques, methods based on convolutional neural networks (CNN) achieved higher accuracy and sensitivity in the early detection of EC than other methods. CONCLUSION Our findings suggest that ML methods may improve accuracy in the early detection of EC, potentially supporting radiologists, endoscopists, and pathologists in diagnosis and treatment planning. However, the current literature is limited, and more studies are needed to investigate the clinical applications of these methods in the early detection of EC. Furthermore, many studies suffer from class imbalance and bias, highlighting the need for validation of detection algorithms across organizations in longitudinal studies.
Affiliation(s)
- Farhang Hosseini, Farkhondeh Asadi, and Hassan Emami: Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mahdi Ebnali: Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
7. Li M, Chen C, Cao Y, Zhou P, Deng X, Liu P, Wang Y, Lv X, Chen C. CIABNet: Category imbalance attention block network for the classification of multi-differentiated types of esophageal cancer. Med Phys 2023;50:1507-1527. [PMID: 36272103] [DOI: 10.1002/mp.16067]
Abstract
BACKGROUND Esophageal cancer has become one of the cancers that most seriously threaten human life and health, and its incidence and mortality rates remain among the highest of all malignant tumors. Histopathological image analysis is the gold standard for diagnosing the different differentiation types of esophageal cancer. PURPOSE The grading accuracy and interpretability of auxiliary diagnostic models for esophageal cancer are seriously affected by small interclass differences, imbalanced data distribution, and poor model interpretability. We therefore developed the category imbalance attention block network (CIABNet) model to address these problems. METHODS First, quantitative metrics and model visualization results are integrated to transfer knowledge from source-domain images and better identify regions of interest (ROI) in the target domain of esophageal cancer. Second, to capture subtle interclass differences, we propose a concatenate fusion attention block, which simultaneously attends to contextual local feature relationships and to changes in channel attention weights among different regions. Third, we propose a category imbalance attention module, which treats each esophageal cancer differentiation class fairly by aggregating information of different intensities at multiple scales and exploring more representative regional features for each class, effectively mitigating the negative impact of category imbalance. Finally, we use feature map visualization to interpret whether the model's ROIs are the same as or similar to those of pathologists, thereby improving the interpretability of the model. RESULTS The experimental results show that the CIABNet model outperforms other state-of-the-art models in classifying the differentiation types of esophageal cancer, achieving an average classification accuracy of 92.24%, average precision of 93.52%, average recall of 90.31%, average F1 value of 91.73%, and average AUC of 97.43%. In addition, in histopathological images of esophageal cancer, the CIABNet model identifies ROIs essentially similar or identical to those of pathologists. CONCLUSIONS Our experimental results show that the proposed computer-aided diagnostic algorithm has great potential for histopathological images of multi-differentiated types of esophageal cancer.
Affiliation(s)
- Min Li: College of Information Science and Engineering and Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Chen Chen: College of Information Science and Engineering, Xinjiang University, Urumqi, China; Xinjiang Cloud Computing Application Laboratory, Karamay, China
- Yanzhen Cao: Department of Pathology, The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, China
- Panyun Zhou and Xin Deng: College of Software, Xinjiang University, Urumqi, China
- Pei Liu: College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Yunling Wang: The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xiaoyi Lv: College of Information Science and Engineering, Key Laboratory of Signal Detection and Processing, College of Software, and Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, China; Xinjiang Cloud Computing Application Laboratory, Karamay, China
- Cheng Chen: College of Software, Xinjiang University, Urumqi, China
8. Zhang S, Mu W, Dong D, Wei J, Fang M, Shao L, Zhou Y, He B, Zhang S, Liu Z, Liu J, Tian J. The Applications of Artificial Intelligence in Digestive System Neoplasms: A Review. Health Data Science 2023;3:0005. [PMID: 38487199] [PMCID: PMC10877701] [DOI: 10.34133/hds.0005]
Abstract
Importance Digestive system neoplasms (DSNs) are the leading cause of cancer-related mortality, with a 5-year survival rate of less than 20%. Subjective evaluation of medical images, including endoscopic images, whole-slide images, computed tomography images, and magnetic resonance images, plays a vital role in the clinical practice of DSNs, but it offers limited performance and increases the workload of radiologists and pathologists. The application of artificial intelligence (AI) in medical image analysis holds promise to augment the visual interpretation of medical images: it could not only automate the complicated evaluation process but also convert medical images into quantitative imaging features associated with tumor heterogeneity. Highlights We briefly introduce the methodology of AI for medical image analysis and then review its clinical applications, including auxiliary diagnosis, assessment of treatment response, and prognosis prediction, in 4 typical DSNs: esophageal cancer, gastric cancer, colorectal cancer, and hepatocellular carcinoma. Conclusion AI technology has great potential to support clinical diagnosis and treatment decision-making for DSNs. Several technical issues should be overcome before its application in the clinical practice of DSNs.
Affiliation(s)
- Shuaitong Zhang, Wei Mu, Mengjie Fang, and Bingxi He: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Di Dong, Jingwei Wei, Lizhi Shao, Yu Zhou, Song Zhang, and Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jianhua Liu: Department of Oncology, Guangdong Provincial People's Hospital/Second Clinical Medical College of Southern Medical University/Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
- Jie Tian: School of Engineering Medicine and Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
9. Development and Validation of Deep Learning Models for the Multiclassification of Reflux Esophagitis Based on the Los Angeles Classification. Journal of Healthcare Engineering 2023;2023:7023731. [PMID: 36852218] [PMCID: PMC9966565] [DOI: 10.1155/2023/7023731]
Abstract
This study evaluates, for the first time, the feasibility of deep learning (DL) models for the multiclassification of reflux esophagitis (RE) endoscopic images according to the Los Angeles (LA) classification. The images were divided into three groups: normal, LA grades A + B, and LA grades C + D. Images from the HyperKvasir dataset and Suzhou hospital were split into training and validation sets at a ratio of 4:1, while images from Jintan hospital formed an independent test set. CNN- or Transformer-architecture models (MobileNet, ResNet, Xception, EfficientNet, ViT, and ConvMixer) were trained via transfer learning in Keras. Model visualization was performed with Gradient-weighted Class Activation Mapping (Grad-CAM). In both the validation set and the test set, the EfficientNet model showed the best performance: accuracy (0.962 and 0.957), recall for LA A + B (0.970 and 0.925) and LA C + D (0.922 and 0.930), macro-recall (0.946 and 0.928), Matthews correlation coefficient (0.936 and 0.884), and Cohen's kappa (0.910 and 0.850), outperforming the other models and the endoscopists. Grad-CAM maps plotted for the EfficientNet model highlighted the target lesions on the original images. This study developed a series of DL-based computer vision models with interpretable Grad-CAM visualizations to evaluate the feasibility of multiclassifying RE endoscopic images, and it suggests, for the first time, that DL-based classifiers show promise in the endoscopic diagnosis of esophagitis.
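The multiclass indicators quoted for the EfficientNet model (macro-recall, Cohen's kappa) can both be read off a confusion matrix. A minimal sketch on an invented 3×3 matrix (not the study's data), with rows as true class and columns as predicted class:

```python
# Illustrative computation of macro-recall and Cohen's kappa from a
# multiclass confusion matrix. The 3x3 counts below are synthetic.
# Rows = true class, columns = predicted; classes: normal, LA A+B, LA C+D.

def macro_recall(cm):
    """Unweighted mean of per-class recalls (diagonal over row sum)."""
    return sum(row[i] / sum(row) for i, row in enumerate(cm)) / len(cm)

def cohens_kappa(cm):
    """Agreement beyond chance: (p_o - p_e) / (1 - p_e)."""
    n = sum(map(sum, cm))
    po = sum(cm[i][i] for i in range(len(cm))) / n            # observed agreement
    pe = sum(sum(row) * sum(col)
             for row, col in zip(cm, zip(*cm))) / n ** 2      # chance agreement
    return (po - pe) / (1 - pe)

cm = [[48, 2, 0],
      [3, 45, 2],
      [0, 4, 46]]
print(round(macro_recall(cm), 3), round(cohens_kappa(cm), 3))
```

Macro-recall treats the three classes equally regardless of their frequency, which is why the abstract reports it alongside plain accuracy for this imbalanced grading task.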
10. Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022;3:117-141. [DOI: 10.35712/aig.v3.i5.117]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
11
Islam MM, Poly TN, Walther BA, Yeh CY, Seyed-Abdul S, Li YC(J), Lin MC. Deep Learning for the Diagnosis of Esophageal Cancer in Endoscopic Images: A Systematic Review and Meta-Analysis. Cancers (Basel) 2022; 14:5996. PMID: 36497480; PMCID: PMC9736434; DOI: 10.3390/cancers14235996
Abstract
Esophageal cancer, one of the most common cancers with a poor prognosis, is the sixth leading cause of cancer-related mortality worldwide. Early and accurate diagnosis of esophageal cancer, thus, plays a vital role in choosing the appropriate treatment plan for patients and increasing their survival rate. However, an accurate diagnosis of esophageal cancer requires substantial expertise and experience. Nowadays, the deep learning (DL) model for the diagnosis of esophageal cancer has shown promising performance. Therefore, we conducted an updated meta-analysis to determine the diagnostic accuracy of the DL model for the diagnosis of esophageal cancer. A search of PubMed, EMBASE, Scopus, and Web of Science, between 1 January 2012 and 1 August 2022, was conducted to identify potential studies evaluating the diagnostic performance of the DL model for esophageal cancer using endoscopic images. The study was performed in accordance with PRISMA guidelines. Two reviewers independently assessed potential studies for inclusion and extracted data from retrieved studies. Methodological quality was assessed by using the QUADAS-2 guidelines. The pooled accuracy, sensitivity, specificity, positive and negative predictive value, and the area under the receiver operating curve (AUROC) were calculated using a random effect model. A total of 28 potential studies involving a total of 703,006 images were included. The pooled accuracy, sensitivity, specificity, and positive and negative predictive value of DL for the diagnosis of esophageal cancer were 92.90%, 93.80%, 91.73%, 93.62%, and 91.97%, respectively. The pooled AUROC of DL for the diagnosis of esophageal cancer was 0.96. Furthermore, there was no publication bias among the studies. The findings of our study show that the DL model has great potential to accurately and quickly diagnose esophageal cancer. However, most studies developed their model using endoscopic data from the Asian population. 
Therefore, we recommend further validation through studies of other populations as well.
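The pooled estimates above come from a random-effect model that combines per-study results while allowing for between-study heterogeneity. A minimal sketch of the idea, using the DerSimonian-Laird estimator on the logit scale with hypothetical per-study counts (not the 28 studies from this meta-analysis):

```python
import math

# Hypothetical per-study (true positives, diseased cases) counts.
studies = [(93, 100), (180, 200), (45, 50), (290, 320)]

def logit(p): return math.log(p / (1 - p))
def inv_logit(x): return 1 / (1 + math.exp(-x))

# Per-study logit-transformed sensitivity and its approximate variance.
effects, variances = [], []
for tp, n in studies:
    effects.append(logit(tp / n))
    variances.append(1 / tp + 1 / (n - tp))

# Fixed-effect (inverse-variance) estimate and Cochran's Q heterogeneity statistic.
w = [1 / v for v in variances]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))

# DerSimonian-Laird between-study variance tau^2 (floored at zero).
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights and the pooled sensitivity back on the probability scale.
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
pooled_sens = inv_logit(pooled_logit)
```

When tau^2 > 0, the random-effects weights shrink toward equality, so large studies dominate less than under a fixed-effect model; published meta-analyses typically use dedicated packages (e.g. bivariate models) rather than this univariate sketch.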
Affiliation(s)
- Md. Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
- Bruno Andreas Walther
- Deep Sea Ecology and Technology, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Am Handelshafen 12, D-27570 Bremerhaven, Germany
- Chih-Yang Yeh
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Shabbir Seyed-Abdul
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Yu-Chuan (Jack) Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei 116, Taiwan
- TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei 110, Taiwan
- Ming-Chin Lin
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Department of Neurosurgery, Shuang Ho Hospital, Taipei Medical University, New Taipei City 23561, Taiwan
- Taipei Neuroscience Institute, Taipei Medical University, Taipei 11031, Taiwan
12
Kim M, Park SK, Kubota Y, Lee S, Park K, Kong DS. Applying a deep convolutional neural network to monitor the lateral spread response during microvascular surgery for hemifacial spasm. PLoS One 2022; 17:e0276378. PMID: 36322573; PMCID: PMC9629649; DOI: 10.1371/journal.pone.0276378
Abstract
BACKGROUND Intraoperative neurophysiological monitoring is essential in neurosurgical procedures. In this study, we built and evaluated the performance of a deep neural network in differentiating between the presence and absence of a lateral spread response, which provides critical information during microvascular decompression surgery for the treatment of hemifacial spasm using intraoperatively acquired electromyography images. METHODS AND FINDINGS A total of 3,674 image screenshots of monitoring devices from 50 patients were prepared, preprocessed, and then adopted into training and validation sets. A deep neural network was constructed using current-standard, off-the-shelf tools. The neural network correctly differentiated 50 test images (accuracy, 100%; area under the curve, 0.96) collected from 25 patients whose data were never exposed to the neural network during training or validation. The accuracy of the network was equivalent to that of the neuromonitoring technologists (p = 0.3013) and higher than that of neurosurgeons experienced in hemifacial spasm (p < 0.0001). Heatmaps obtained to highlight the key region of interest achieved a level similar to that of trained human professionals. Provisional clinical application showed that the neural network was preferable as an auxiliary tool. CONCLUSIONS A deep neural network trained on a dataset of intraoperatively collected electromyography data could classify the presence and absence of the lateral spread response with equivalent performance to human professionals. Well-designated applications based upon the neural network may provide useful auxiliary tools for surgical teams during operations.
Affiliation(s)
- Minsoo Kim
- Department of Neurosurgery, Gangneung Asan Hospital, Gangneung, Korea
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Department of Medicine, Graduate School, Yonsei University College of Medicine, Seoul, Korea
- Sang-Ku Park
- Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Seunghoon Lee
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kwan Park
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Doo-Sik Kong
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
13
Tu JX, Lin XT, Ye HQ, Yang SL, Deng LF, Zhu RL, Wu L, Zhang XQ. Global research trends of artificial intelligence applied in esophageal carcinoma: A bibliometric analysis (2000-2022) via CiteSpace and VOSviewer. Front Oncol 2022; 12:972357. PMID: 36091151; PMCID: PMC9453500; DOI: 10.3389/fonc.2022.972357
Abstract
Objective: Using visual bibliometric analysis, the application and development of artificial intelligence (AI) in clinical esophageal cancer are summarized, and the research progress, hotspots, and emerging trends of AI are elucidated. Methods: On April 7th, 2022, articles and reviews regarding the application of AI in esophageal cancer published between 2000 and 2022 were chosen from the Web of Science Core Collection. To conduct co-authorship, co-citation, and co-occurrence analysis of countries, institutions, authors, references, and keywords in this field, VOSviewer (version 1.6.18), CiteSpace (version 5.8.R3), Microsoft Excel 2019, R 4.2, an online bibliometric platform (http://bibliometric.com/) and an online browser plugin (https://www.altmetric.com/) were used. Results: A total of 918 papers were included, with 23,490 citations. 5,979 authors, 39,962 co-cited authors, and 42,992 co-cited papers were identified in the study. Most publications were from China (317). In terms of the H-index (45) and citations (9,925), the United States topped the list. The New England Journal of Medicine, in the category Medicine, General & Internal (IF = 91.25), published the most studies on this topic. The University of Amsterdam had the largest number of publications among all institutions. The past 22 years of research can be broadly divided into two periods: research from 2000 to 2016 focused on the classification, identification, and comparison of esophageal cancer, while more recently (2017-2022) the application of AI has centered on endoscopy, diagnosis, and precision therapy, which have become the frontiers of this field. Precision-oriented clinical measures for esophageal cancer based on big data analysis are expected to become the research hotspot in the future. Conclusions: An increasing number of scholars are devoted to AI-related esophageal cancer research. The research field of AI in esophageal cancer has entered a new stage. In the future, there is a need to continue to strengthen cooperation between countries and institutions. Improving the diagnostic accuracy of esophageal imaging and enabling big data-based treatment and prognosis prediction through deep learning will remain the focus of research. The application of AI in esophageal cancer still has many challenges to overcome before it can be utilized clinically.
Affiliation(s)
- Jia-xin Tu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Xue-ting Lin
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Hui-qing Ye
- School of Public Health, Nanchang University, Nanchang, China
- Shan-lan Yang
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Li-fang Deng
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Ruo-ling Zhu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Lei Wu
- School of Public Health, Nanchang University, Nanchang, China
- Jiangxi Provincial Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, China
- Xiao-qiang Zhang
- Department of Thoracic Surgery, The Second Affiliated Hospital of Nanchang University, Nanchang, China
- *Correspondence: Lei Wu; Xiao-qiang Zhang
14
Ali H, Sharif M, Yasmin M, Rehmani MH. A shallow extraction of texture features for classification of abnormal video endoscopy frames. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103733
15
Zhou Z, Yu L, Tian S, Xing Y, Liu M, Xiao G, Wang J, Wang F. Local-global multiple perception based deep multi-modality learning for sub-type of esophageal cancer classification. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103757
16
Ay B, Turker C, Emre E, Ay K, Aydin G. Automated classification of nasal polyps in endoscopy video-frames using handcrafted and CNN features. Comput Biol Med 2022; 147:105725. PMID: 35716434; DOI: 10.1016/j.compbiomed.2022.105725
Abstract
Nasal polyps are edematous polypoid masses covered by smooth, gray, shiny, soft and gelatinous mucosa. They often pose a threat to patients, resulting in allergic rhinitis, sinus infections and asthma. The aim of this paper is to design a reliable rhinology assistance system for recognizing nasal polyps in endoscopic videos. We introduce NP-80, a novel dataset that contains high-quality endoscopy video-frames of 80 participants with and without nasal polyps (NP). We benchmark vanilla machine learning and deep learning-based classifiers on the proposed dataset with respect to robustness and accuracy. We conduct a series of classification experiments and an exhaustive empirical comparison of handcrafted features (texture features, Local Binary Patterns (LBP); shape features, Histogram of Oriented Gradients (HOG)) and Convolutional Neural Network (CNN) features for recognizing nasal polyps automatically. The classification experiments are carried out with K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT) and CNN classifiers. The best obtained precision, recall, and accuracy rates are 99%, 98%, and 98.3%, respectively. The classifiers built with handcrafted features showed poorer recognition performance (best accuracy of 96.3%) than the proposed CNN classifier (best accuracy of 98.3%). The empirical results of the proposed learning techniques on the NP-80 dataset are promising for supporting clinical decision systems. We make our dataset publicly available to encourage further research on rhinology experiments. The major research objective accomplished in this study is the creation of a high-accuracy deep learning-based nasal polyp classification model using easily obtainable portable rhino-fiberoscope images, to be integrated into an otolaryngologist decision support system. We conclude from the research that using appropriate image processing techniques along with suitable deep learning networks allows researchers to obtain high-accuracy recommendations in identifying nasal polyps. Furthermore, the results of the study encourage us to develop deep learning models for various other medical conditions.
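The Local Binary Pattern texture descriptor this abstract compares against CNN features thresholds each pixel's 8 neighbours at the centre value and reads the results as one byte; a histogram of these codes then serves as the feature vector. A minimal, self-contained sketch on a toy grayscale image (hypothetical values, not the NP-80 data; practical implementations typically use uniform LBP variants, e.g. scikit-image's `local_binary_pattern`):

```python
# 8-neighbour Local Binary Pattern (LBP) on a toy grayscale image.
def lbp_code(img, r, c):
    """LBP code of pixel (r, c): the 8 neighbours thresholded at the centre
    value, read clockwise from the top-left corner as one byte."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels,
    usable as a texture feature vector for a downstream classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

image = [
    [10, 10, 10, 10],
    [10, 50, 60, 10],
    [10, 55, 52, 10],
    [10, 10, 10, 10],
]
hist = lbp_histogram(image)
```

The resulting histogram, not the raw codes, is what gets fed to the KNN/SVM/RF/DT classifiers mentioned above.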
Affiliation(s)
- Betul Ay
- Department of Computer Engineering, Firat University Faculty of Engineering, Elazig, Turkey
- Cihan Turker
- Department of Otorhinolaryngology, Mus State Hospital, Mus, Turkey
- Elif Emre
- Department of Anatomy, Firat University Faculty of Medicine, Elazig, Turkey
- Kevser Ay
- Department of Internal Medical Sciences, Firat University Faculty of Medicine, Elazig, Turkey
- Galip Aydin
- Department of Computer Engineering, Firat University Faculty of Engineering, Elazig, Turkey
17
Zhou Y, Yuan X, Zhang X, Liu W, Wu Y, Yen GG, Hu B, Yi Z. Evolutionary Neural Architecture Search for Automatic Esophageal Lesion Identification and Segmentation. IEEE Trans Artif Intell 2022. DOI: 10.1109/tai.2021.3134600
Affiliation(s)
- Yao Zhou
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Xianglei Yuan
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Xiaozhi Zhang
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Wei Liu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Yu Wu
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Gary G. Yen
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
- Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Zhang Yi
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
18
Renna F, Martins M, Neto A, Cunha A, Libânio D, Dinis-Ribeiro M, Coimbra M. Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice. Diagnostics (Basel) 2022; 12:1278. PMID: 35626433; PMCID: PMC9141387; DOI: 10.3390/diagnostics12051278
Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). In 2035, a 20% increase will be observed both in incidence and mortality due to demographic effects if no interventions are foreseen. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold tasks of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesion changes. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools within the UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and methods able to embed domain knowledge into image/video classifiers as well as the availability of large, annotated datasets.
Affiliation(s)
- Francesco Renna
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
- Miguel Martins
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
- Alexandre Neto
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- António Cunha
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Diogo Libânio
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal
- Mário Dinis-Ribeiro
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal
- Miguel Coimbra
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
19
Minchenberg SB, Walradt T, Glissen Brown JR. Scoping out the future: The application of artificial intelligence to gastrointestinal endoscopy. World J Gastrointest Oncol 2022; 14:989-1001. PMID: 35646286; PMCID: PMC9124983; DOI: 10.4251/wjgo.v14.i5.989
Abstract
Artificial intelligence (AI) is a quickly expanding field in gastrointestinal endoscopy. Although there are a myriad of applications of AI, ranging from identification of bleeding to predicting outcomes in patients with inflammatory bowel disease, a great deal of research has focused on the identification and classification of gastrointestinal malignancies. Several of the initial randomized, prospective trials utilizing AI in clinical medicine have centered on polyp detection during screening colonoscopy. In addition to work focused on colorectal cancer, AI systems have also been applied to gastric, esophageal, pancreatic, and liver cancers. Despite promising results in initial studies, the generalizability of most of these AI systems has not yet been evaluated. In this article we review recent developments in the field of AI applied to gastrointestinal oncology.
Affiliation(s)
- Scott B Minchenberg
- Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
- Trent Walradt
- Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
- Jeremy R Glissen Brown
- Division of Gastroenterology, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
20
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural network, which is a class of deep learning method, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 articles in esophagus, 30 articles in stomach, 29 articles in CE, and 63 articles in colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been recently reported. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of the retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we have highlighted the published literature along with providing current status and insights into the future of AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
21
Visaggi P, Barberio B, Gregori D, Azzolina D, Martinato M, Hassan C, Sharma P, Savarino E, de Bortoli N. Systematic review with meta-analysis: artificial intelligence in the diagnosis of oesophageal diseases. Aliment Pharmacol Ther 2022; 55:528-540. PMID: 35098562; PMCID: PMC9305819; DOI: 10.1111/apt.16778
Abstract
BACKGROUND Artificial intelligence (AI) has recently been applied to endoscopy and questionnaires for the evaluation of oesophageal diseases (ODs). AIM We performed a systematic review with meta-analysis to evaluate the performance of AI in the diagnosis of malignant and benign OD. METHODS We searched MEDLINE, EMBASE, EMBASE Classic and the Cochrane Library. A bivariate random-effect model was used to calculate pooled diagnostic efficacy of AI models and endoscopists. The reference tests were histology for neoplasms and the clinical and instrumental diagnosis for gastro-oesophageal reflux disease (GERD). The pooled area under the summary receiver operating characteristic (AUROC), sensitivity, specificity, positive and negative likelihood ratio (PLR and NLR) and diagnostic odds ratio (DOR) were estimated. RESULTS For the diagnosis of Barrett's neoplasia, AI had AUROC of 0.90, sensitivity 0.89, specificity 0.86, PLR 6.50, NLR 0.13 and DOR 50.53. AI models' performance was comparable with that of endoscopists (P = 0.35). For the diagnosis of oesophageal squamous cell carcinoma, the AUROC, sensitivity, specificity, PLR, NLR and DOR were 0.97, 0.95, 0.92, 12.65, 0.05 and DOR 258.36, respectively. In this task, AI performed better than endoscopists although without statistically significant differences. In the detection of abnormal intrapapillary capillary loops, the performance of AI was: AUROC 0.98, sensitivity 0.94, specificity 0.94, PLR 14.75, NLR 0.07 and DOR 225.83. For the diagnosis of GERD based on questionnaires, the AUROC, sensitivity, specificity, PLR, NLR and DOR were 0.99, 0.97, 0.97, 38.26, 0.03 and 1159.6, respectively. CONCLUSIONS AI demonstrated high performance in the clinical and endoscopic diagnosis of OD.
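The likelihood ratios and diagnostic odds ratio reported above are simple functions of sensitivity and specificity. As an illustrative sketch only, using the abstract's oesophageal squamous cell carcinoma estimates (sensitivity 0.95, specificity 0.92); note that in a bivariate random-effect model the pooled PLR/NLR/DOR are estimated jointly, so they need not exactly equal the ratios of the pooled sensitivity and specificity:

```python
# Diagnostic summary measures derived from sensitivity and specificity.
def likelihood_ratios(sens, spec):
    plr = sens / (1 - spec)   # positive likelihood ratio: P(T+|D+) / P(T+|D-)
    nlr = (1 - sens) / spec   # negative likelihood ratio: P(T-|D+) / P(T-|D-)
    dor = plr / nlr           # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(0.95, 0.92)
```

A PLR above 10 and an NLR below 0.1 are conventional thresholds for a test that substantially shifts post-test probability in both directions, which is the clinical reading of the figures quoted in this abstract.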
Affiliation(s)
- Pierfrancesco Visaggi
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Brigida Barberio
- Division of Gastroenterology, Department of Surgery, Oncology and Gastroenterology, University of Padova, Padova, Italy
- Dario Gregori
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
- Danila Azzolina
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
- Department of Medical Science, University of Ferrara, Ferrara, Italy
- Matteo Martinato
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072 Pieve Emanuele, Milan, Italy
- IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089 Rozzano, Milan, Italy
- Prateek Sharma
- University of Kansas School of Medicine and VA Medical Center, Kansas City, Missouri, USA
- Edoardo Savarino
- Division of Gastroenterology, Department of Surgery, Oncology and Gastroenterology, University of Padova, Padova, Italy
- Nicola de Bortoli
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
22
Tang S, Yu X, Cheang CF, Hu Z, Fang T, Choi IC, Yu HH. Diagnosis of Esophageal Lesions by Multi-Classification and Segmentation Using an Improved Multi-Task Deep Learning Model. Sensors (Basel) 2022; 22:1492. PMID: 35214396; PMCID: PMC8876234; DOI: 10.3390/s22041492
Abstract
It is challenging for endoscopists to accurately detect esophageal lesions during gastrointestinal endoscopic screening due to visual similarities among different lesions in terms of shape, size, and texture among patients. Additionally, endoscopists face a heavy daily workload, hence the need to develop a computer-aided diagnostic tool to classify and segment the lesions in endoscopic images to reduce their burden. Therefore, we propose a multi-task classification and segmentation (MTCS) model, including the Esophageal Lesions Classification Network (ELCNet) and Esophageal Lesions Segmentation Network (ELSNet). The ELCNet was used to classify types of esophageal lesions, and the ELSNet was used to identify lesion regions. We created a dataset by collecting 805 esophageal images from 255 patients and 198 images from 64 patients to train and evaluate the MTCS model. Compared with other methods, the proposed model not only achieved a high accuracy (93.43%) in classification but also achieved a dice similarity coefficient of 77.84% in segmentation. In conclusion, the MTCS model can boost the performance of endoscopists in the detection of esophageal lesions, as it can accurately multi-classify and segment lesions, and it is a potential assistant for endoscopists to reduce the risk of oversight.
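The dice similarity coefficient used to score the segmentation branch measures the overlap between a predicted lesion mask and the ground-truth mask. A minimal sketch on toy binary masks (hypothetical, not ELSNet output):

```python
# Dice similarity coefficient: DSC = 2*|A ∩ B| / (|A| + |B|)
# over flattened binary masks; 1.0 means perfect overlap.
def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))  # pixels positive in both masks
    size = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]   # predicted lesion mask (flattened)
truth = [0, 1, 1, 0, 0, 1, 1, 0]   # ground-truth mask (flattened)
score = dice(pred, truth)
```

Unlike pixel accuracy, the DSC is insensitive to the large background region, which is why it is the standard metric for small-lesion segmentation tasks like this one.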
Affiliation(s)
- Suigu Tang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
- Xiaoyuan Yu
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
- Chak-Fong Cheang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
- Correspondence:
- Zeming Hu
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
- Tong Fang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
- I-Cheong Choi
- Kiang Wu Hospital, Macau 999078, China; (I.-C.C.); (H.-H.Y.)
- Hon-Ho Yu
- Kiang Wu Hospital, Macau 999078, China; (I.-C.C.); (H.-H.Y.)
23
Visaggi P, de Bortoli N, Barberio B, Savarino V, Oleas R, Rosi EM, Marchi S, Ribolsi M, Savarino E. Artificial Intelligence in the Diagnosis of Upper Gastrointestinal Diseases. J Clin Gastroenterol 2022; 56:23-35. [PMID: 34739406] [PMCID: PMC9988236] [DOI: 10.1097/mcg.0000000000001629]
Abstract
Artificial intelligence (AI) has enormous potential to support routine clinical workflows and is therefore gaining popularity among medical professionals. In gastroenterology, investigations of AI and computer-aided diagnosis (CAD) systems have mainly focused on the lower gastrointestinal (GI) tract; however, numerous CAD tools have also been tested in upper GI disorders, with encouraging results. The main application of AI in the upper GI tract is endoscopy, but the need to analyze growing loads of numerical and categorical data in a short time has pushed researchers to investigate AI systems in other upper GI settings, including gastroesophageal reflux disease, eosinophilic esophagitis, and motility disorders. AI and CAD systems will be increasingly incorporated into daily clinical practice in the coming years, so physicians will soon need at least a basic understanding of them. To nonspecialists, the working principles and potential of AI may be as fascinating as they are obscure. Accordingly, we reviewed systematic reviews, meta-analyses, randomized controlled trials, and original research articles on the performance of AI in diagnosing both malignant and benign esophageal and gastric diseases, and we also discuss the essential characteristics of AI.
Affiliation(s)
- Pierfrancesco Visaggi
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa
- Nicola de Bortoli
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa
- Brigida Barberio
- Department of Surgery, Oncology, and Gastroenterology, Division of Gastroenterology, University of Padua, Padua
- Vincenzo Savarino
- Gastroenterology Unit, Department of Internal Medicine, University of Genoa, Genoa
- Roberto Oleas
- Ecuadorean Institute of Digestive Diseases, Guayaquil, Ecuador
- Emma M. Rosi
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa
- Santino Marchi
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa
- Mentore Ribolsi
- Department of Digestive Diseases, Campus Bio Medico University of Rome, Roma, Italy
- Edoardo Savarino
- Department of Surgery, Oncology, and Gastroenterology, Division of Gastroenterology, University of Padua, Padua
24
Pan W, Li X, Wang W, Zhou L, Wu J, Ren T, Liu C, Lv M, Su S, Tang Y. Identification of Barrett's esophagus in endoscopic images using deep learning. BMC Gastroenterol 2021; 21:479. [PMID: 34920705] [PMCID: PMC8684213] [DOI: 10.1186/s12876-021-02055-2]
Abstract
BACKGROUND To develop a deep learning method that identifies the extent of Barrett's esophagus (BE) in endoscopic images. METHODS A total of 443 endoscopic images from 187 patients with BE were included in this study. Experts manually annotated the gastroesophageal junction (GEJ) and squamous-columnar junction (SCJ) of BE in the endoscopic images. Fully convolutional networks (FCNs) were developed to automatically identify the extent of BE and were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IOU). RESULTS The deep learning method proved satisfactory for automated identification of BE in endoscopic images, with IOU values of 0.56 (GEJ) and 0.82 (SCJ), respectively. CONCLUSIONS The deep learning algorithm segments the extent of BE with promising concordance with manual expert assessment. This automated recognition method can help clinicians locate and delineate BE during endoscopic examinations.
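For readers unfamiliar with the IOU metric used above, a minimal illustrative sketch (not code from the cited paper) on binary segmentation masks looks like this:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union (IOU) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Toy 2x3 masks: intersection has 2 pixels, union has 4 -> IOU = 0.5
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, truth))  # 0.5
```

Note that IOU penalizes disagreement more sharply than the Dice coefficient on the same pair of masks, which is why reported IOU values (such as 0.56 and 0.82 here) tend to run lower than Dice values for comparable overlap.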
Affiliation(s)
- Wen Pan
- Department of Digestion, West China Hospital of Sichuan University, Chengdu, 610054, Sichuan, China
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Xujia Li
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
- Linjing Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
- Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Tao Ren
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Chao Liu
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Muhan Lv
- Department of Digestion, The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Song Su
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
25
Su Z, Liang B, Shi F, Gelfond J, Šegalo S, Wang J, Jia P, Hao X. Deep learning-based facial image analysis in medical research: a systematic review protocol. BMJ Open 2021; 11:e047549. [PMID: 34764164] [PMCID: PMC8587597] [DOI: 10.1136/bmjopen-2020-047549]
Abstract
INTRODUCTION Deep learning techniques are gaining momentum in medical research. Evidence shows that deep learning has advantages over humans in image identification and classification, such as analyzing facial images to detect people's medical conditions. While positive findings are available, little is known about the state of the art of deep learning-based facial image analysis in the medical context. For the sake of patients' welfare and the development of the practice, a timely understanding of the challenges and opportunities facing this research is needed. To address this gap, we aim to conduct a systematic review identifying the characteristics and effects of deep learning-based facial image analysis in medical research. Insights gained from this review will provide a much-needed understanding of the characteristics, challenges, and opportunities of deep learning-based facial image analysis applied to disease detection, diagnosis, and prognosis. METHODS Databases including PubMed, PsycINFO, CINAHL, IEEEXplore, and Scopus will be searched for relevant studies published in English in September 2021. Titles, abstracts, and full-text articles will be screened to identify eligible articles, and a manual search of the reference lists of the included articles will also be conducted. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was adopted to guide the review process. Two reviewers will independently examine the citations and select studies for inclusion; discrepancies will be resolved by group discussion until consensus is reached. Data will be extracted based on the research objective and selection criteria adopted in this study. ETHICS AND DISSEMINATION As this study is a protocol for a systematic review, ethical approval is not required. Findings will be disseminated via peer-reviewed publications and conference presentations.
PROSPERO REGISTRATION NUMBER CRD42020196473.
Affiliation(s)
- Zhaohui Su
- Center on Smart and Connected Health Technologies, Mays Cancer Center, School of Nursing, UT Health San Antonio, San Antonio, Texas, USA
- Bin Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- J Gelfond
- Epidemiology and Biostatistics, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Sabina Šegalo
- Department of Microbiology, University of Sarajevo, Sarajevo, Bosnia and Herzegovina
- Jing Wang
- College of Nursing, Florida State University, Tallahassee, Florida, USA
- Peng Jia
- Department of Land Surveying and Geo-Informatics, University of Twente, Enschede, Netherlands
- International Initiative on Spatial Lifecourse Epidemiology (ISLE), Enschede, Netherlands
- Xiaoning Hao
- Division of Health Security Research, National Health Commission of the People's Republic of China, Beijing, China
26
Li N, Jin SZ. Artificial intelligence and early esophageal cancer. Artif Intell Gastrointest Endosc 2021; 2:198-210. [DOI: 10.37126/aige.v2.i5.198]
Abstract
The progression of esophageal cancer (EC) from early to advanced stage results in high mortality and poor prognosis. Advanced EC not only poses a serious threat to patients' life and health but also places a heavy economic burden on their families and society. Endoscopy is of great value for the diagnosis of EC, especially in screening for Barrett's esophagus and early EC; at present, however, its diagnostic rate for early tumors is low. In recent years, artificial intelligence (AI) has made remarkable progress in the diagnosis of digestive system tumors, providing a new model for clinicians to diagnose and treat these tumors. In this review, we provide a comprehensive overview of how AI can help doctors diagnose early EC and precancerous lesions and make clinical decisions based on predicted results. Analyzing and summarizing recent research on AI and early EC, we find that computer-aided diagnosis systems based on deep learning (DL) and convolutional neural network methods have gradually developed from in vitro image analysis to real-time detection and diagnosis, and that, backed by powerful computing and DL capabilities, the diagnostic accuracy of AI is close to or better than that of endoscopy specialists. We also analyze the shortcomings of current AI research and corresponding improvement strategies. We believe that AI-assisted endoscopy for the diagnosis of early EC and precancerous lesions will become practical as AI-related research advances further.
Affiliation(s)
- Ning Li
- Department of Gastroenterology and Hepatology, The Second Affiliated Hospital of Harbin Medical University, Harbin 150086, Heilongjiang Province, China
- Shi-Zhu Jin
- Department of Gastroenterology and Hepatology, The Second Affiliated Hospital of Harbin Medical University, Harbin 150086, Heilongjiang Province, China
27
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008] [PMCID: PMC8567482] [DOI: 10.3748/wjg.v27.i40.6794]
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored in most medical specialties. The field of gastroenterology and hepatology, which relies substantially on vast numbers of imaging studies, is no exception. Clinical applications of AI systems in this field include identifying premalignant or malignant lesions (e.g., dysplasia or esophageal adenocarcinoma in Barrett's esophagus, pancreatic malignancies), detecting lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), developing objective scoring systems for risk stratification, predicting disease prognosis or treatment response (e.g., determining survival after resection of hepatocellular carcinoma, or identifying which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluating metrics such as bowel-preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies spanning the entire gastrointestinal tract, including the upper, middle, and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications as well as outlining the current limitations and future directions in this field.
Affiliation(s)
- Paul T Kröner
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Megan ML Engels
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
- Benjamin S Glicksberg
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Kipp W Johnson
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Obaie Mzaik
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Jeanin E van Hooft
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden 2300, The Netherlands
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
- Hashem B El-Serag
- Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Chayakrit Krittanawong
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
28

29
Zhang SM, Wang YJ, Zhang ST. Accuracy of artificial intelligence-assisted detection of esophageal cancer and neoplasms on endoscopic images: A systematic review and meta-analysis. J Dig Dis 2021; 22:318-328. [PMID: 33871932] [PMCID: PMC8361665] [DOI: 10.1111/1751-2980.12992]
Abstract
OBJECTIVE To systematically investigate previous studies on the accuracy of artificial intelligence (AI)-assisted diagnostic models in detecting esophageal neoplasms on endoscopic images, so as to provide scientific evidence for the effectiveness of these models. METHODS A literature search was conducted in the PubMed, EMBASE, and Cochrane Library databases for studies on AI-assisted detection of esophageal neoplasms on endoscopic images published up to December 2020. A bivariate mixed-effects regression model was used to calculate the pooled diagnostic efficacy of the AI-assisted systems. Subgroup and meta-regression analyses were performed to explore sources of heterogeneity, and the effectiveness of the AI-assisted models was compared with that of endoscopists. RESULTS Sixteen studies were included in the systematic review and meta-analysis. For AI-assisted detection of esophageal neoplasms, the pooled sensitivity, specificity, positive and negative likelihood ratios, diagnostic odds ratio, and area under the summary receiver operating characteristic curve were 94% (95% confidence interval [CI] 92%-96%), 85% (95% CI 73%-92%), 6.40 (95% CI 3.38-12.11), 0.06 (95% CI 0.04-0.10), 98.88 (95% CI 39.45-247.87), and 0.97 (95% CI 0.95-0.98), respectively. AI-based models outperformed endoscopists in terms of pooled sensitivity (94% [95% CI 84%-98%] vs. 82% [95% CI 77%-86%], P < 0.01). CONCLUSIONS The use of AI increases accuracy in detecting early esophageal cancer. However, most of the included studies had retrospective designs, so further validation in prospective trials is required.
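As background on the metrics summarized above: the pooled estimates come from a bivariate mixed-effects model, but each underlying per-study metric is derived from a simple 2x2 confusion table. The following minimal sketch (with hypothetical counts, not data from the meta-analysis) shows the standard definitions:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Per-study diagnostic accuracy metrics from a 2x2 confusion table."""
    sens = tp / (tp + fn)               # sensitivity (true-positive rate)
    spec = tn / (tn + fp)               # specificity (true-negative rate)
    lr_pos = sens / (1 - spec)          # positive likelihood ratio
    lr_neg = (1 - sens) / spec          # negative likelihood ratio
    dor = lr_pos / lr_neg               # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return {"sens": sens, "spec": spec, "lr+": lr_pos, "lr-": lr_neg, "dor": dor}

# Hypothetical counts chosen so sensitivity/specificity mirror the pooled values
m = diagnostic_metrics(tp=94, fp=15, fn=6, tn=85)
print(m["sens"], m["spec"])  # 0.94 0.85
```

Note that plugging in counts that reproduce the pooled sensitivity and specificity does not exactly reproduce the pooled likelihood ratios or diagnostic odds ratio, because the bivariate model pools each quantity across studies rather than computing them from a single aggregated table.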
Affiliation(s)
- Si Min Zhang
- Department of GastroenterologyBeijing Friendship Hospital, Capital Medical UniversityBeijingChina,National Clinical Research Center for Digestive DiseasesBeijingChina,Beijing Digestive Disease CenterBeijingChina
| | - Yong Jun Wang
- Department of GastroenterologyBeijing Friendship Hospital, Capital Medical UniversityBeijingChina,National Clinical Research Center for Digestive DiseasesBeijingChina,Beijing Digestive Disease CenterBeijingChina
| | - Shu Tian Zhang
- Department of GastroenterologyBeijing Friendship Hospital, Capital Medical UniversityBeijingChina,National Clinical Research Center for Digestive DiseasesBeijingChina,Beijing Digestive Disease CenterBeijingChina
30
Yan T, Wong PK, Qin YY. Deep learning for diagnosis of precancerous lesions in upper gastrointestinal endoscopy: A review. World J Gastroenterol 2021; 27:2531-2544. [PMID: 34092974] [PMCID: PMC8160615] [DOI: 10.3748/wjg.v27.i20.2531]
Abstract
Upper gastrointestinal (GI) cancers are the leading cause of cancer-related deaths worldwide. Early identification of precancerous lesions has been shown to minimize the incidence of GI cancers and substantiate the vital role of screening endoscopy. However, unlike GI cancers, precancerous lesions in the upper GI tract can be subtle and difficult to detect. Artificial intelligence techniques, especially deep learning algorithms with convolutional neural networks, might help endoscopists identify the precancerous lesions and reduce interobserver variability. In this review, a systematic literature search was undertaken of the Web of Science, PubMed, Cochrane Library and Embase, with an emphasis on the deep learning-based diagnosis of precancerous lesions in the upper GI tract. The status of deep learning algorithms in upper GI precancerous lesions has been systematically summarized. The challenges and recommendations targeting this field are comprehensively analyzed for future research.
Affiliation(s)
- Tao Yan
- School of Mechanical Engineering, Hubei University of Arts and Science, Xiangyang 441053, Hubei Province, China
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- Pak Kin Wong
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- Ye-Ying Qin
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
31
Yu H, Singh R, Shin SH, Ho KY. Artificial intelligence in upper GI endoscopy - current status, challenges and future promise. J Gastroenterol Hepatol 2021; 36:20-24. [PMID: 33448515] [DOI: 10.1111/jgh.15354]
Abstract
White-light endoscopy with biopsy is the current gold standard modality for detecting and diagnosing upper gastrointestinal (GI) pathology. However, missed lesions remain a challenge. To overcome interobserver variability and learning curve issues, artificial intelligence (AI) has recently been introduced to assist endoscopists in the detection and diagnosis of upper GI neoplasia. In contrast to AI in colonoscopy, current AI studies for upper GI endoscopy are smaller pilot studies. Researchers currently lack large volume, well-annotated, high-quality datasets in gastric cancer, dysplasia in Barrett's esophagus and early esophageal squamous cell cancer. This review will look at the latest studies of AI in upper GI endoscopy, discuss some of the challenges facing researchers, and predict what the future may hold in this rapidly changing field.
Affiliation(s)
- Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Rajvinder Singh
- Department of Gastroenterology, Lyell McEwin Hospital, University of Adelaide, Adelaide, South Australia, Australia
- Seon Ho Shin
- Department of Gastroenterology, Lyell McEwin Hospital, University of Adelaide, Adelaide, South Australia, Australia
- Khek Yu Ho
- Department of Gastroenterology and Hepatology, National University Hospital, National University of Singapore, Singapore