1. Nathani P, Sharma P. Role of Artificial Intelligence in the Detection and Management of Premalignant and Malignant Lesions of the Esophagus and Stomach. Gastrointest Endosc Clin N Am 2025; 35:319-353. [PMID: 40021232 DOI: 10.1016/j.giec.2024.10.003]
Abstract
The advent of artificial intelligence (AI) and deep learning algorithms, particularly convolutional neural networks, promises to address these pitfalls, bridging gaps in the care of high-risk patients through improved detection (computer-aided detection [CADe]) and characterization (computer-aided diagnosis [CADx]) of lesions. This review describes the available AI technology and the current data on AI tools for screening esophageal squamous cell cancer, Barrett's esophagus-related neoplasia, and gastric cancer. These tools have outperformed endoscopists in many situations, and recent randomized controlled trials have demonstrated the successful application of AI tools in clinical practice with improved outcomes.
Affiliation(s)
- Piyush Nathani
- Department of Gastroenterology, University of Kansas School of Medicine, Kansas City, KS, USA.
- Prateek Sharma
- Department of Gastroenterology, University of Kansas School of Medicine, Kansas City, KS, USA; Kansas City Veteran Affairs Medical Center, Kansas City, MO, USA
2. Albuquerque C, Henriques R, Castelli M. Deep learning-based object detection algorithms in medical imaging: Systematic review. Heliyon 2025; 11:e41137. [PMID: 39758372 PMCID: PMC11699422 DOI: 10.1016/j.heliyon.2024.e41137]
Abstract
Over the past decade, deep learning (DL) techniques have demonstrated remarkable advancements across various domains, driving their widespread adoption. In medical image analysis in particular, DL has received considerable attention for tasks such as image segmentation, object detection, and classification. This paper provides an overview of DL-based object recognition in medical images, exploring recent methods and emphasizing different imaging techniques and anatomical applications. Using a quantitative and qualitative analysis following PRISMA guidelines, we examined publications selected by citation rate to explore the utilization of DL-based object detectors across imaging modalities and anatomical domains. Our findings reveal a consistent rise in the use of DL-based object detection models, indicating untapped potential in medical image analysis. Research in this area, predominantly within the Medicine and Computer Science domains, is most active in the US, China, and Japan. Notably, DL-based object detection methods have attracted significant interest across diverse medical imaging modalities and anatomical domains, having been applied to a range of techniques including CR scans, pathology images, and endoscopic imaging, showcasing their adaptability. Moreover, diverse anatomical applications, particularly in digital pathology and microscopy, have been explored. The analysis underscores the presence of varied datasets, often with significant discrepancies in size, a notable percentage of which are labeled as private or internal, while prospective studies in this field remain scarce. Our review of existing trends in DL-based object detection in medical images offers insights for future research directions. The continuous evolution of DL algorithms highlighted in the literature underscores the dynamic nature of this field and the need for ongoing research and optimization tailored to specific applications.
3. Li S, Xu M, Meng Y, Sun H, Zhang T, Yang H, Li Y, Ma X. The application of the combination between artificial intelligence and endoscopy in gastrointestinal tumors. MedComm – Oncology 2024; 3. [DOI: 10.1002/mog2.91]
Abstract
Gastrointestinal (GI) tumors have long been a major category of malignant tumor and a leading cause of tumor-related deaths worldwide. The main principles of modern medicine for GI tumors are early prevention, early diagnosis, and early treatment, with early diagnosis being the most effective measure. Endoscopy, because it allows direct visualization of lesions, has been one of the primary modalities for screening, diagnosing, and treating GI tumors. However, becoming a qualified endoscopist often requires long training and extensive experience, which to some extent limits the wider use of endoscopy. With advances in data science, artificial intelligence (AI) has opened a new direction for the development of endoscopy for GI tumors. AI can quickly process large quantities of data and images and, with some training, improve diagnostic accuracy, greatly reducing the workload of endoscopists and assisting them in early diagnosis. This review therefore focuses on the combined application of endoscopy and AI in GI tumors in recent years, describing the latest research progress for the main tumor types and their performance in clinical trials, the application of multimodal AI in endoscopy, the development of endoscopy itself, and the potential applications of AI within it, with the aim of providing a reference for subsequent research.
Affiliation(s)
- Shen Li
- Department of Biotherapy Cancer Center, West China Hospital, West China Medical School Sichuan University Chengdu China
- Maosen Xu
- Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy, West China Hospital, National Clinical Research, Sichuan University Chengdu Sichuan China
- Yuanling Meng
- West China School of Stomatology Sichuan University Chengdu Sichuan China
- Haozhen Sun
- College of Life Sciences Sichuan University Chengdu Sichuan China
- Tao Zhang
- Department of Biotherapy Cancer Center, West China Hospital, West China Medical School Sichuan University Chengdu China
- Hanle Yang
- Department of Biotherapy Cancer Center, West China Hospital, West China Medical School Sichuan University Chengdu China
- Yueyi Li
- Department of Biotherapy Cancer Center, West China Hospital, West China Medical School Sichuan University Chengdu China
- Xuelei Ma
- Department of Biotherapy Cancer Center, West China Hospital, West China Medical School Sichuan University Chengdu China
4. Souza LA, Passos LA, Santana MCS, Mendel R, Rauber D, Ebigbo A, Probst A, Messmann H, Papa JP, Palm C. Layer-selective deep representation to improve esophageal cancer classification. Med Biol Eng Comput 2024; 62:3355-3372. [PMID: 38848031 DOI: 10.1007/s11517-024-03142-8]
Abstract
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be clarified to account for their promising results. Hence, we investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To propose a two-step learning technique, the output of each convolutional layer composing the ResNet-50 architecture was trained and classified in order to identify the layers with the greatest impact on the architecture. We showed that local information and high-dimensional features are essential to improve classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
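For readers unfamiliar with layer-wise feature extraction, the sketch below shows one generic way to pull intermediate activations out of a standard ResNet-50 with forward hooks so that each stage can feed a separate classifier. It is a minimal Python/PyTorch illustration of the general idea only, not the authors' two-step pipeline; the pooling step and the choice of layers are assumptions.

import torch
from torchvision import models

# Standard ResNet-50; a dummy batch stands in for Barrett's esophagus images.
model = models.resnet50(weights=None)
model.eval()
features = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Global-average-pool each feature map into a fixed-length vector
        # that a separate per-layer classifier could consume.
        features[name] = output.mean(dim=(2, 3)).detach()
    return hook

# Register hooks on the four residual stages (layer-selective representation).
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(2, 3, 224, 224))

for name, feat in features.items():
    print(name, tuple(feat.shape))  # layer1 (2, 256) ... layer4 (2, 2048)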
Affiliation(s)
- Luis A Souza
- Department of Informatics, Espírito Santo Federal University, Vitória, Brazil.
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany.
- Leandro A Passos
- CMI Lab, School of Engineering and Informatics, University of Wolverhampton, Wolverhampton, UK
- Robert Mendel
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- David Rauber
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- Alanna Ebigbo
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Andreas Probst
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Helmut Messmann
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- João Paulo Papa
- Department of Computing, São Paulo State University, Bauru, Brazil
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
5. Sreedharan JK, Saleh F, Alqahtani A, Albalawi IA, Gopalakrishnan GK, Alahmed HA, Alsultan BA, Alalharith DM, Alnasser M, Alahmari AD, Karthika M. Applications of artificial intelligence in emergency and critical care diagnostics: a systematic review and meta-analysis. Front Artif Intell 2024; 7:1422551. [PMID: 39430618 PMCID: PMC11487586 DOI: 10.3389/frai.2024.1422551]
Abstract
Introduction Artificial intelligence has become a highlight of almost all fields of science. It uses various models and algorithms to detect patterns and specific findings in order to diagnose disease with the utmost accuracy. With the increasing need for accurate and precise diagnosis of disease, employing artificial intelligence models and concepts in healthcare settings can be beneficial.
Methodology The search engines and databases employed in this study were PubMed, ScienceDirect, and Medline. Studies published between 1 January 2013 and 1 February 2023 were included in this analysis. The selected articles were screened preliminarily using the Rayyan web tool, after which investigators screened the selected articles individually. The risk of bias for the selected studies was assessed using the QUADAS-2 tool, which is specifically designed to assess bias in diagnostic test accuracy reviews.
Results In this review, 17 studies were included from a total of 12,173 studies. These studies were analysed for their sensitivity, accuracy, positive predictive value, specificity, and negative predictive value in diagnosing Barrett's neoplasia, cardiac arrest, esophageal adenocarcinoma, sepsis, and gastrointestinal stromal tumors. All the studies reported heterogeneity with a p-value <0.05 at a 95% confidence interval.
Conclusion The existing evidence suggests that artificial intelligence can be highly helpful in diagnosis, providing high precision and early detection. This helps to prevent disease progression and to provide treatment at the earliest opportunity. Employing artificial intelligence in diagnosis will shape the advancement of the healthcare environment and benefit every aspect of the treatment of illness.
Affiliation(s)
- Jithin K. Sreedharan
- Department of Respiratory Therapy, College of Health Sciences, University of Doha for Science and Technology, Doha, Qatar
- Fred Saleh
- Deanship—College of Health Sciences, University of Doha for Science and Technology, Doha, Qatar
- Abdullah Alqahtani
- Department of Respiratory Care, Prince Sultan Military College of Health Sciences, Dammam, Saudi Arabia
- Ibrahim Ahmed Albalawi
- Department of Respiratory Care, Prince Sultan Military College of Health Sciences, Dammam, Saudi Arabia
- Musallam Alnasser
- Department of Respiratory Care, Prince Sultan Military College of Health Sciences, Dammam, Saudi Arabia
- Ayedh Dafer Alahmari
- Department of Rehabilitation Science, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Manjush Karthika
- Faculty of Medical and Health Sciences, Liwa College, Abu Dhabi, United Arab Emirates
6. Janaki R, Lakshmi D. Hybrid model-based early diagnosis of esophageal disorders using convolutional neural network and refined logistic regression. EURASIP Journal on Image and Video Processing 2024; 2024:19. [DOI: 10.1186/s13640-024-00634-3]
7. Ragab DA, Fayed S, Ghatwary N. DeepCSFusion: Deep Compressive Sensing Fusion for Efficient COVID-19 Classification. Journal of Imaging Informatics in Medicine 2024; 37:1346-1358. [PMID: 38381386 PMCID: PMC11300776 DOI: 10.1007/s10278-024-01011-2]
Abstract
Worldwide, the COVID-19 pandemic, which started in 2019, has resulted in millions of deaths. The medical research community has widely used computer analysis of medical data during the pandemic, specifically deep learning models. Deploying models on devices with constrained resources is a significant challenge due to the increased storage demands of larger deep learning models. Accordingly, in this paper we propose a novel compression strategy that compresses deep features at compression ratios of 10 to 90% to accurately classify COVID-19 and non-COVID-19 computed tomography scans. Additionally, we extensively validated the compression using various available deep learning methods to extract the most suitable features from different models. Finally, the suggested DeepCSFusion model compresses the extracted features and applies fusion to achieve the highest classification accuracy with fewer features. The proposed DeepCSFusion model was validated on the publicly available "SARS-CoV-2 CT" dataset composed of 1252 CT scans. This study demonstrates that the proposed DeepCSFusion reduced computational time while achieving an overall accuracy of 99.3%. It also outperforms state-of-the-art pipelines in terms of various classification measures.
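As background, one common compressive-sensing-style way to shrink a deep feature vector to a chosen fraction of its length is a random Gaussian projection, as in the hypothetical sketch below; this is only a generic illustration of feature compression and fusion by concatenation, not the actual DeepCSFusion operators.

import numpy as np

rng = np.random.default_rng(0)

def compress(x, ratio):
    """Project a 1-D feature vector onto round(ratio * len(x)) random measurements."""
    m = max(1, int(round(ratio * x.size)))
    phi = rng.normal(scale=1.0 / np.sqrt(m), size=(m, x.size))  # sensing matrix
    return phi @ x

deep_features = rng.normal(size=2048)        # e.g. one backbone's feature vector
compressed = compress(deep_features, 0.10)   # keep roughly 10% of the dimensions
print(compressed.shape)                      # (205,)

# A simple fusion of compressed features from two backbones: concatenation.
fused = np.concatenate([compressed, compress(rng.normal(size=2048), 0.10)])
print(fused.shape)                           # (410,)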
Affiliation(s)
- Dina A Ragab
- Electronics & Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt.
- Salema Fayed
- Computer Engineering Department, Arab Academy for Science Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt
- Noha Ghatwary
- Computer Engineering Department, Arab Academy for Science Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt
8. Lin Q, Tan W, Cai S, Yan B, Li J, Zhong Y. Lesion-Decoupling-Based Segmentation With Large-Scale Colon and Esophageal Datasets for Early Cancer Diagnosis. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:11142-11156. [PMID: 37028330 DOI: 10.1109/tnnls.2023.3248804]
Abstract
Lesions of early cancers often appear flat, small, and isochromatic in medical endoscopy images, making them difficult to capture. By analyzing the differences between the internal and external features of the lesion area, we propose a lesion-decoupling-based segmentation (LDS) network for assisting early cancer diagnosis. We introduce a plug-and-play module called the self-sampling similar feature disentangling module (FDM) to obtain accurate lesion boundaries. We then propose a feature separation loss (FSL) function to separate pathological features from normal ones. Moreover, since physicians make diagnoses with multimodal data, we propose a multimodal cooperative segmentation network that takes two different modal images as input: white-light images (WLIs) and narrowband images (NBIs). Our FDM and FSL show good performance for both single-modal and multimodal segmentation. Extensive experiments on five backbones prove that FDM and FSL can be easily applied to different backbones for a significant improvement in lesion segmentation accuracy, with a maximum increase in mean Intersection over Union (mIoU) of 4.58. For colonoscopy, we achieve an mIoU of up to 91.49 on our Dataset A and 84.41 on the three public datasets. For esophagoscopy, the best mIoU achieved is 64.32 on the WLI dataset and 66.31 on the NBI dataset.
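For reference, the mean Intersection over Union (mIoU) figures quoted above are computed per class and then averaged; a minimal Python sketch with toy integer-labeled masks (not the paper's evaluation code) is given below.

import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union for integer-labeled segmentation masks."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # a class absent from both masks is skipped
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # ~0.583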
9. Kikuchi R, Okamoto K, Ozawa T, Shibata J, Ishihara S, Tada T. Endoscopic Artificial Intelligence for Image Analysis in Gastrointestinal Neoplasms. Digestion 2024; 105:419-435. [PMID: 39068926 DOI: 10.1159/000540251]
Abstract
BACKGROUND Artificial intelligence (AI) using deep learning systems has recently been utilized in various medical fields. In the field of gastroenterology, AI is primarily implemented in image recognition and utilized in the realm of gastrointestinal (GI) endoscopy. In GI endoscopy, computer-aided detection/diagnosis (CAD) systems assist endoscopists in GI neoplasm detection or differentiation of cancerous or noncancerous lesions. Several AI systems for colorectal polyps have already been applied in colonoscopy clinical practices. In esophagogastroduodenoscopy, a few CAD systems for upper GI neoplasms have been launched in Asian countries. The usefulness of these CAD systems in GI endoscopy has been gradually elucidated.
SUMMARY In this review, we outline recent articles on several studies of endoscopic AI systems for GI neoplasms, focusing on esophageal squamous cell carcinoma (ESCC), esophageal adenocarcinoma (EAC), gastric cancer (GC), and colorectal polyps. In ESCC and EAC, computer-aided detection (CADe) systems were mainly developed, and a recent meta-analysis study showed sensitivities of 91.2% and 93.1% and specificities of 80% and 86.9%, respectively. In GC, a recent meta-analysis study on CADe systems demonstrated that their sensitivity and specificity were as high as 90%. A randomized controlled trial (RCT) also showed that the use of the CADe system reduced the miss rate. Regarding computer-aided diagnosis (CADx) systems for GC, although RCTs have not yet been conducted, most studies have demonstrated expert-level performance. In colorectal polyps, multiple RCTs have shown the usefulness of the CADe system for improving the polyp detection rate, and several CADx systems have been shown to have high accuracy in colorectal polyp differentiation.
KEY MESSAGES Most analyses of endoscopic AI systems suggested that their performance was better than that of nonexpert endoscopists and equivalent to that of expert endoscopists. Thus, endoscopic AI systems may be useful for reducing the risk of overlooking lesions and improving the diagnostic ability of endoscopists.
Affiliation(s)
- Ryosuke Kikuchi
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Kazuaki Okamoto
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tsuyoshi Ozawa
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
- Junichi Shibata
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
- Soichiro Ishihara
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tomohiro Tada the Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
10. Jong MR, de Groof AJ. Advancement of artificial intelligence systems for surveillance endoscopy of Barrett's esophagus. Dig Liver Dis 2024; 56:1126-1130. [PMID: 38071181 DOI: 10.1016/j.dld.2023.11.038]
Abstract
Barrett's esophagus (BE) is a precursor condition for esophageal adenocarcinoma. Timely detection and treatment have a significant influence on patient outcomes. Over the last few years, several artificial intelligence (AI) systems have emerged to assist the endoscopist. The primary focus of research has been computer-aided detection (CADe), and several groups have succeeded in developing competitive models for neoplasia detection. Additionally, computer-aided diagnosis (CADx) models have been developed for subsequent lesion characterization and assistance in clinical decision making. Future studies should focus on bridging the domain gap between academic development and integration in daily practice.
Affiliation(s)
- M R Jong
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- A J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands.
11. Jian M, Tao C, Wu R, Zhang H, Li X, Wang R, Wang Y, Peng L, Zhu J. HRU-Net: A high-resolution convolutional neural network for esophageal cancer radiotherapy target segmentation. Computer Methods and Programs in Biomedicine 2024; 250:108177. [PMID: 38648704 DOI: 10.1016/j.cmpb.2024.108177]
Abstract
BACKGROUND AND OBJECTIVE Effective segmentation of esophageal squamous cell carcinoma lesions in CT scans is significant for auxiliary diagnosis and treatment. However, accurate lesion segmentation remains challenging due to the irregular shape and small size of the esophagus, the inconsistency of its spatio-temporal structure, and the low contrast between the esophagus and its peripheral tissues in medical images. The objective of this study is to improve the segmentation of esophageal squamous cell carcinoma lesions.
METHODS It is critical for a segmentation network to extract 3D discriminative features that distinguish esophageal cancers from visually similar adjacent esophageal tissues and organs. In this work, an efficient HRU-Net (High-Resolution U-Net) architecture was developed for esophageal carcinoma segmentation in CT slices. Based on the idea of localization first and segmentation later, HRU-Net locates the esophageal region before segmentation. In addition, a Resolution Fusion Module (RFM) was designed to integrate information from adjacent-resolution feature maps, obtaining strong semantic information while preserving high-resolution features.
RESULTS Compared with five other typical methods, the devised HRU-Net generates superior segmentation results.
CONCLUSIONS The proposed HRU-Net improves the accuracy of segmentation for esophageal squamous cell carcinoma and performs best among the compared models. The designed method may improve the efficiency of clinical diagnosis of esophageal squamous cell carcinoma lesions.
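The paper's Resolution Fusion Module is not specified in this abstract; the sketch below shows only one plausible, generic form of adjacent-resolution fusion (upsample the coarser map, align channels with a 1x1 convolution, and add), written in PyTorch under those assumptions and not reflecting the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentResolutionFusion(nn.Module):
    """Generic fusion of a low-resolution feature map into a high-resolution one."""
    def __init__(self, low_channels, high_channels):
        super().__init__()
        self.align = nn.Conv2d(low_channels, high_channels, kernel_size=1)

    def forward(self, high, low):
        # Upsample the coarser map to the fine map's spatial size, then add.
        low = F.interpolate(low, size=high.shape[2:], mode="bilinear", align_corners=False)
        return high + self.align(low)  # semantic detail added, resolution preserved

high = torch.randn(1, 64, 128, 128)  # fine branch
low = torch.randn(1, 128, 64, 64)    # adjacent coarser branch
print(AdjacentResolutionFusion(128, 64)(high, low).shape)  # torch.Size([1, 64, 128, 128])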
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China; School of Information Science and Technology, Linyi University, Linyi, China.
- Chen Tao
- School of Information Science and Technology, Linyi University, Linyi, China
- Ronghua Wu
- School of Information Science and Technology, Linyi University, Linyi, China
- Haoran Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Xiaoguang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Rui Wang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Yanlei Wang
- Youth League Committee, Shandong University of Political Science and Law, Jinan, China
- Lizhi Peng
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan, China
- Jian Zhu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital affiliated to Shandong First Medical University, Jinan, China
12. Di J, Lu XS, Sun M, Zhao ZM, Zhang CD. Hospital volume-mortality association after esophagectomy for cancer: a systematic review and meta-analysis. Int J Surg 2024; 110:3021-3029. [PMID: 38353697 PMCID: PMC11093504 DOI: 10.1097/js9.0000000000001185]
Abstract
BACKGROUND Postoperative mortality plays an important role in evaluating the surgical safety of esophagectomy. Although postoperative mortality after esophagectomy is partly influenced by the yearly hospital surgical case volume (hospital volume), this association remains unclear.
METHODS Studies assessing the association between hospital volume and postoperative mortality in patients who underwent esophagectomy for esophageal cancer were searched for eligibility. Odds ratios were pooled for the highest versus lowest categories of hospital volume using a random-effects model. The dose-response association between hospital volume and the risk of postoperative mortality was analyzed. The study protocol was registered with PROSPERO.
RESULTS Fifty-six studies including 385,469 participants were included. Higher-volume hospitals significantly reduced the risk of postesophagectomy mortality by 53% compared with their lower-volume counterparts (odds ratio, 0.47; 95% CI: 0.42-0.53). Similar results were found in subgroup analyses. Volume-outcome analysis suggested that postesophagectomy mortality rates remained roughly stable once hospital volume reached a plateau of 45 esophagectomies per year.
CONCLUSIONS Higher-volume hospitals had significantly lower postesophagectomy mortality rates in patients with esophageal cancer, with a threshold of 45 esophagectomies per year defining a high-volume hospital. This strong inverse association indicates a safety benefit of centralizing esophagectomy in high-volume hospitals.
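Pooling odds ratios under a random-effects model is normally done on the log scale; the sketch below is a minimal DerSimonian-Laird implementation with made-up study values, shown only to illustrate the method and not using any data from this meta-analysis.

import numpy as np

def pool_odds_ratios(log_or, var):
    """DerSimonian-Laird random-effects pooling of study-level log odds ratios."""
    log_or, var = np.asarray(log_or, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)   # between-study variance
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    # Pooled OR with 95% CI, back-transformed to the odds-ratio scale.
    return np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])

# Four hypothetical studies: log odds ratios and their variances.
print(pool_odds_ratios([-0.9, -0.6, -0.8, -0.5], [0.04, 0.02, 0.05, 0.03]))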
Affiliation(s)
- Min Sun
- Department of General Surgery, Taihe Hospital, Hubei University of Medicine, Shiyan, People’s Republic of China
- Zhe-Ming Zhao
- Department of Surgical Oncology, The Fourth Affiliated Hospital of China Medical University, Shenyang
- Chun-Dong Zhang
- Central Laboratory
- Department of Surgical Oncology, The Fourth Affiliated Hospital of China Medical University, Shenyang
13. Dijkhuis TH, Bijlstra OD, Warmerdam MI, Faber RA, Linders DGJ, Galema HA, Broersen A, Dijkstra J, Kuppen PJK, Vahrmeijer AL, Mieog JSD. Semi-automatic standardized analysis method to objectively evaluate near-infrared fluorescent dyes in image-guided surgery. Journal of Biomedical Optics 2024; 29:026001. [PMID: 38312853 PMCID: PMC10833575 DOI: 10.1117/1.jbo.29.2.026001]
Abstract
Significance Near-infrared fluorescence imaging still lacks a standardized, objective method to evaluate fluorescent dye efficacy in oncological surgical applications. This causes difficulties in translating fluorescent dyes from preclinical to clinical studies and in reproducing results between studies, which in turn hampers further clinical translation of novel fluorescent dyes.
Aim We aim to develop and evaluate a semi-automatic standardized method to objectively assess fluorescent signals in resected tissue.
Approach A standardized imaging procedure was designed and quantitative analysis methods were developed to evaluate non-targeted and tumor-targeted fluorescent dyes. The developed analysis methods included manual selection of regions of interest (ROIs) on white-light images, automated fluorescence-signal ROI selection, and automatic quantitative image analysis. The proposed analysis method was then compared with a conventional analysis method in which fluorescence-signal ROIs were manually selected on fluorescence images. Dice similarity coefficients and intraclass correlation coefficients were calculated to determine the inter- and intraobserver variabilities of the ROI selections and of the determined signal- and tumor-to-background ratios.
Results The proposed analysis method for non-targeted fluorescent dyes showed statistically significantly improved variabilities when applied to indocyanine green specimens. For specimens with the targeted dye SGM-101, the variability of the background ROI selection was statistically significantly improved by implementing the proposed method.
Conclusion Semi-automatic methods for standardized quantitative analysis of fluorescence images were successfully developed and showed promising results for further improving the reproducibility and standardization of clinical studies evaluating fluorescent dyes.
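The Dice similarity coefficient used here to compare ROI selections is twice the overlap divided by the total size of the two masks; a minimal Python sketch with toy binary masks (not the study's analysis code) follows.

import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary ROI masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty ROIs are treated as identical
    return 2.0 * np.logical_and(a, b).sum() / total

roi_observer_1 = np.zeros((4, 4), dtype=int); roi_observer_1[1:3, 1:3] = 1
roi_observer_2 = np.zeros((4, 4), dtype=int); roi_observer_2[1:4, 1:3] = 1
print(dice(roi_observer_1, roi_observer_2))  # 0.8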
Affiliation(s)
- Tom H. Dijkhuis
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Okker D. Bijlstra
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Amsterdam University Medical Center, Cancer Center Amsterdam, Department of Surgery, Amsterdam, The Netherlands
- Mats I. Warmerdam
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Centre of Human Drug Research, Leiden, The Netherlands
- Robin A. Faber
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Daan G. J. Linders
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Hidde A. Galema
- Erasmus MC Cancer Institute, Department of Surgical Oncology and Gastrointestinal Surgery, Rotterdam, The Netherlands
- Alexander Broersen
- Leiden University Medical Center, Department of Radiology, Leiden, The Netherlands
- Jouke Dijkstra
- Leiden University Medical Center, Department of Radiology, Leiden, The Netherlands
- Peter J. K. Kuppen
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Jan Sven David Mieog
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
14. Ahn JC, Shah VH. Artificial intelligence in gastroenterology and hepatology. Artificial Intelligence in Clinical Practice 2024:443-464. [DOI: 10.1016/b978-0-443-15688-5.00016-4]
15. Fockens KN, Jong MR, Jukema JB, Boers TGW, Kusters CHJ, van der Putten JA, Pouw RE, Duits LC, Montazeri NSM, van Munster SN, Weusten BLAM, Alvarez Herrero L, Houben MHMG, Nagengast WB, Westerhof J, Alkhalaf A, Mallant-Hent RC, Scholten P, Ragunath K, Seewald S, Elbe P, Baldaque-Silva F, Barret M, Ortiz Fernández-Sordo J, Villarejo GM, Pech O, Beyna T, van der Sommen F, de With PH, de Groof AJ, Bergman JJ. A deep learning system for detection of early Barrett's neoplasia: a model development and validation study. Lancet Digit Health 2023; 5:e905-e916. [PMID: 38000874 DOI: 10.1016/s2589-7500(23)00199-1]
Abstract
BACKGROUND Computer-aided detection (CADe) systems could assist endoscopists in detecting early neoplasia in Barrett's oesophagus, which can be difficult to detect in endoscopic images. The aim of this study was to develop, test, and benchmark a CADe system for early neoplasia in Barrett's oesophagus.
METHODS The CADe system was first pretrained with ImageNet followed by domain-specific pretraining with GastroNet. We trained the CADe system on a dataset of 14 046 images (2506 patients) of confirmed Barrett's oesophagus neoplasia and non-dysplastic Barrett's oesophagus from 15 centres. Neoplasia was delineated by 14 Barrett's oesophagus experts for all datasets. We tested the performance of the CADe system on two independent test sets. The all-comers test set comprised 327 (73 patients) non-dysplastic Barrett's oesophagus images, 82 (46 patients) neoplastic images, 180 (66 of the same patients) non-dysplastic Barrett's oesophagus videos, and 71 (45 of the same patients) neoplastic videos. The benchmarking test set comprised 100 (50 patients) neoplastic images, 300 (125 patients) non-dysplastic images, 47 (47 of the same patients) neoplastic videos, and 141 (82 of the same patients) non-dysplastic videos, and was enriched with subtle neoplasia cases. The benchmarking test set was evaluated by 112 endoscopists from six countries (first without CADe and, after 6 weeks, with CADe) and by 28 external international Barrett's oesophagus experts. The primary outcome was the sensitivity of Barrett's neoplasia detection by general endoscopists without CADe assistance versus with CADe assistance on the benchmarking test set. We compared sensitivity using a mixed-effects logistic regression model with conditional odds ratios (ORs; likelihood profile 95% CIs).
FINDINGS Sensitivity for neoplasia detection among endoscopists increased with CADe assistance from 74% to 88% (OR 2·04; 95% CI 1·73-2·42; p<0·0001) for images and from 67% to 79% (OR 2·35; 1·90-2·94; p<0·0001) for video, without compromising specificity (from 89% to 90% [OR 1·07; 0·96-1·19; p=0·20] for images and from 96% to 94% [OR 0·94; 0·79-1·11; p=0·46] for video). In the all-comers test set, CADe detected neoplastic lesions in 95% (88-98) of images and 97% (90-99) of videos. In the benchmarking test set, the CADe system was superior to endoscopists in detecting neoplasia (90% vs 74% [OR 3·75; 95% CI 1·93-8·05; p=0·0002] for images and 91% vs 67% [OR 11·68; 3·85-47·53; p<0·0001] for video) and non-inferior to Barrett's oesophagus experts (90% vs 87% [OR 1·74; 95% CI 0·83-3·65] for images and 91% vs 86% [OR 2·94; 0·99-11·40] for video).
INTERPRETATION CADe outperformed endoscopists in detecting Barrett's oesophagus neoplasia and, when used as an assistive tool, improved their detection rate. CADe detected virtually all neoplasia in a test set of consecutive cases.
FUNDING Olympus.
Affiliation(s)
- K N Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- M R Jong
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- J B Jukema
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- T G W Boers
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- C H J Kusters
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- J A van der Putten
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- R E Pouw
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- L C Duits
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- N S M Montazeri
- Biostatistics Unit, Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- S N van Munster
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands; Department of Gastroenterology and Hepatology, St Antonius Hospital, Nieuwegein, Netherlands
- B L A M Weusten
- Department of Gastroenterology and Hepatology, UMC Utrecht, University of Utrecht, Utrecht, Netherlands; Department of Gastroenterology and Hepatology, St Antonius Hospital, Nieuwegein, Netherlands
- L Alvarez Herrero
- Department of Gastroenterology and Hepatology, St Antonius Hospital, Nieuwegein, Netherlands
- M H M G Houben
- Department of Gastroenterology and Hepatology, HagaZiekenhuis Den Haag, Den Haag, Netherlands
- W B Nagengast
- Department of Gastroenterology and Hepatology, UMC Groningen, University of Groningen, Groningen, Netherlands
- J Westerhof
- Department of Gastroenterology and Hepatology, UMC Groningen, University of Groningen, Groningen, Netherlands
- A Alkhalaf
- Department of Gastroenterology and Hepatology, Isala Hospital Zwolle, Zwolle, Netherlands
- R C Mallant-Hent
- Department of Gastroenterology and Hepatology, Flevoziekenhuis Almere, Almere, Netherlands
- P Scholten
- Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, Netherlands
- K Ragunath
- Department of Gastroenterology and Hepatology, Royal Perth Hospital, Curtin University, Perth, WA, Australia
- S Seewald
- Department of Gastroenterology and Hepatology, Hirslanden Klinik, Zurich, Switzerland
- P Elbe
- Department of Digestive Diseases, Karolinska University Hospital, Stockholm, Sweden; Division of Surgery, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
- F Baldaque-Silva
- Department of Digestive Diseases, Karolinska University Hospital, Stockholm, Sweden; Center for Advanced Endoscopy Carlos Moreira da Silva, Gastroenterology Department, Pedro Hispano Hospital, Matosinhos, Portugal
- M Barret
- Department of Gastroenterology and Hepatology, Cochin Hospital Paris, Paris, France
- J Ortiz Fernández-Sordo
- Department of Gastroenterology and Hepatology, Nottingham University Hospitals NHS Trust, Nottingham, UK
- G Moral Villarejo
- Department of Gastroenterology and Hepatology, Nottingham University Hospitals NHS Trust, Nottingham, UK
- O Pech
- Department of Gastroenterology and Hepatology, St John of God Hospital, Regensburg, Germany
- T Beyna
- Department of Gastroenterology and Hepatology, Evangalisches Krankenhaus Düsseldorf, Düsseldorf, Germany
- F van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- P H de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- A J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- J J Bergman
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands.
16. Guidozzi N, Menon N, Chidambaram S, Markar SR. The role of artificial intelligence in the endoscopic diagnosis of esophageal cancer: a systematic review and meta-analysis. Dis Esophagus 2023; 36:doad048. [PMID: 37480192 PMCID: PMC10789250 DOI: 10.1093/dote/doad048]
Abstract
Early detection of esophageal cancer is limited by the difficulty of accurate endoscopic diagnosis of subtle macroscopic lesions. Endoscopic interpretation is subject to expertise and diagnostic skill, and thus to human error. Artificial intelligence (AI) in endoscopy is increasingly bridging this gap. This systematic review and meta-analysis consolidates the evidence on the use of AI in the endoscopic diagnosis of esophageal cancer. The systematic review was carried out using the PubMed, MEDLINE, and Ovid EMBASE databases, and articles on the role of AI in the endoscopic diagnosis of esophageal cancer were included. A meta-analysis was also performed. Fourteen studies (1590 patients) assessed the use of AI in the endoscopic diagnosis of esophageal squamous cell carcinoma: the pooled sensitivity and specificity were 91.2% (84.3-95.2%) and 80% (64.3-89.9%). Nine studies (478 patients) assessed the ability of AI to diagnose esophageal adenocarcinoma, with a pooled sensitivity and specificity of 93.1% (86.8-96.4) and 86.9% (81.7-90.7). The remaining studies formed the qualitative summary. AI technology, as an adjunct to endoscopy, can assist in the accurate, early detection of esophageal malignancy. It has shown results superior to endoscopists alone in identifying early cancer and assessing the depth of tumor invasion, with the added benefit of not requiring a specialized skill set. Despite promising results, application in real-time endoscopy is limited, and further multicenter trials are required to accurately assess its use in routine practice.
Affiliation(s)
- Nadia Guidozzi
- Department of General Surgery, University of Witwatersrand, Johannesburg, South Africa
- Nainika Menon
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Swathikan Chidambaram
- Academic Surgical Unit, Department of Surgery and Cancer, Imperial College London, St Mary’s Hospital, London, UK
- Sheraz Rehan Markar
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Nuffield Department of Surgery, University of Oxford, Oxford, UK
17. Zhang JQ, Mi JJ, Wang R. Application of convolutional neural network-based endoscopic imaging in esophageal cancer or high-grade dysplasia: A systematic review and meta-analysis. World J Gastrointest Oncol 2023; 15:1998-2016. [DOI: 10.4251/wjgo.v15.i11.1998]
Abstract
BACKGROUND Esophageal cancer is the seventh-most common cancer type worldwide, accounting for 5% of deaths from malignancy. Development of novel diagnostic techniques has facilitated screening, early detection, and improved prognosis. Convolutional neural network (CNN)-based image analysis promises great potential for diagnosing and determining the prognosis of esophageal cancer, enabling even early detection of dysplasia.
AIM To conduct a meta-analysis of the diagnostic accuracy of CNN models for the diagnosis of esophageal cancer and high-grade dysplasia (HGD).
METHODS PubMed, EMBASE, Web of Science and Cochrane Library databases were searched for articles published up to November 30, 2022. We evaluated the diagnostic accuracy of using the CNN model with still image-based analysis and with video-based analysis for esophageal cancer or HGD, as well as for the invasion depth of esophageal cancer. The pooled sensitivity, pooled specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and area under the curve (AUC) were estimated, together with the 95% confidence intervals (CI). A bivariate method and hierarchical summary receiver operating characteristic method were used to calculate the diagnostic test accuracy of the CNN model. Meta-regression and subgroup analyses were used to identify sources of heterogeneity.
RESULTS A total of 28 studies were included in this systematic review and meta-analysis. Using still image-based analysis for the diagnosis of esophageal cancer or HGD provided a pooled sensitivity of 0.95 (95%CI: 0.92-0.97), pooled specificity of 0.92 (0.89-0.94), PLR of 11.5 (8.3-16.0), NLR of 0.06 (0.04-0.09), DOR of 205 (115-365), and AUC of 0.98 (0.96-0.99). When video-based analysis was used, a pooled sensitivity of 0.85 (0.77-0.91), pooled specificity of 0.73 (0.59-0.83), PLR of 3.1 (1.9-5.0), NLR of 0.20 (0.12-0.34), DOR of 15 (6-38) and AUC of 0.87 (0.84-0.90) were found. Prediction of invasion depth resulted in a pooled sensitivity of 0.90 (0.87-0.92), pooled specificity of 0.83 (95%CI: 0.76-0.88), PLR of 7.8 (1.9-32.0), NLR of 0.10 (0.41-0.25), DOR of 118 (11-1305), and AUC of 0.95 (0.92-0.96).
CONCLUSION CNN-based image analysis in diagnosing esophageal cancer and HGD is an excellent diagnostic method with high sensitivity and specificity that merits further investigation in large, multicenter clinical trials.
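The summary measures reported above are related to sensitivity and specificity by standard formulas (PLR = sensitivity / (1 - specificity), NLR = (1 - sensitivity) / specificity, DOR = PLR / NLR). The pooled values in the review come from a bivariate model rather than from plugging in pooled sensitivity and specificity, so the quick Python check below only approximates the reported 11.5, 0.06 and 205; it is an illustrative sketch, not a re-analysis.

def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio."""
    plr = sensitivity / (1.0 - specificity)
    nlr = (1.0 - sensitivity) / specificity
    return plr, nlr, plr / nlr  # DOR = PLR / NLR

plr, nlr, dor = likelihood_ratios(0.95, 0.92)  # pooled still-image estimates
print(round(plr, 2), round(nlr, 3), round(dor, 1))  # 11.88 0.054 218.5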
Affiliation(s)
- Jun-Qi Zhang
- The Fifth Clinical Medical College, Shanxi Medical University, Taiyuan 030001, Shanxi Province, China
- Jun-Jie Mi
- Department of Gastroenterology, Shanxi Provincial People’s Hospital, Taiyuan 030012, Shanxi Province, China
- Rong Wang
- Department of Gastroenterology, The Fifth Hospital of Shanxi Medical University (Shanxi Provincial People’s Hospital), Taiyuan 030012, Shanxi Province, China
18. Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023; 10:1239. [PMID: 38002363 PMCID: PMC10669008 DOI: 10.3390/bioengineering10111239]
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
Affiliation(s)
- Ruichen Cui
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Lei Wang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lin Lin
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Jie Li
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Runda Lu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Shixiang Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Bowei Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Yimin Gu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Hanlu Zhang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Qixin Shang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Longqi Chen
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- Dong Tian
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
19. Tee CHN, Ravi R, Ang TL, Li JW. Role of artificial intelligence in Barrett's esophagus. Artif Intell Gastroenterol 2023; 4:28-35. [DOI: 10.35712/aig.v4.i2.28]
Abstract
The application of artificial intelligence (AI) in gastrointestinal endoscopy has gained significant traction over the last decade. One of the more recent applications of AI in this field is the detection of dysplasia and cancer in Barrett's esophagus (BE). AI using deep learning methods has shown promise as an adjunct to the endoscopist in detecting dysplasia and cancer. Apart from visual detection and diagnosis, AI may also help reduce the considerable interobserver variability in identifying and distinguishing dysplasia on whole-slide images from digitized BE histology slides. This review aims to provide a comprehensive summary of the key studies thus far, as well as an insight into the future role of AI in Barrett's esophagus.
Affiliation(s)
- Chin Hock Nicholas Tee
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- Rajesh Ravi
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
20. Hosseini F, Asadi F, Emami H, Harari RE. Machine learning applications for early detection of esophageal cancer: a systematic review. BMC Med Inform Decis Mak 2023; 23:124. [PMID: 37460991 PMCID: PMC10351192 DOI: 10.1186/s12911-023-02235-y]
Abstract
INTRODUCTION Esophageal cancer (EC) is a significant global health problem, ranking an estimated seventh in incidence and sixth in mortality among cancers. Timely diagnosis and treatment are critical for improving patients' outcomes, as over 40% of patients with EC are diagnosed after metastasis. Recent advances in machine learning (ML) techniques, particularly in computer vision, have demonstrated promising applications in medical image processing, assisting clinicians in making more accurate and faster diagnostic decisions. Given the significance of early detection of EC, this systematic review aims to summarize and discuss the current state of research on ML-based methods for the early detection of EC.
METHODS We conducted a comprehensive systematic search of five databases (PubMed, Scopus, Web of Science, Wiley, and IEEE) using search terms such as "ML", "deep learning (DL)", "neural networks (NN)", "esophagus", "EC", and "early detection". After applying inclusion and exclusion criteria, 31 articles were retained for full review.
RESULTS The results of this review highlight the potential of ML-based methods in the early detection of EC. The average accuracy of the reviewed methods in the analysis of endoscopic and computed tomography (CT) images of the esophagus was over 89%, indicating a high impact on the early detection of EC. Additionally, the largest share of clinical images used for ML-based early detection of EC were white-light imaging (WLI) images. Among all ML techniques, methods based on convolutional neural networks (CNN) achieved higher accuracy and sensitivity in the early detection of EC compared with other methods.
CONCLUSION Our findings suggest that ML methods may improve accuracy in the early detection of EC, potentially supporting radiologists, endoscopists, and pathologists in diagnosis and treatment planning. However, the current literature is limited, and more studies are needed to investigate the clinical applications of these methods in the early detection of EC. Furthermore, many studies suffer from class imbalance and bias, highlighting the need for validation of detection algorithms across organizations in longitudinal studies.
Collapse
Affiliation(s)
- Farhang Hosseini
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
| | - Hassan Emami
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | | |
Collapse
|
21
|
Gao XW, Taylor S, Pang W, Hui R, Lu X, Braden B. Fusion of colour contrasted images for early detection of oesophageal squamous cell dysplasia from endoscopic videos in real time. INFORMATION FUSION 2023; 92:64-79. [DOI: 10.1016/j.inffus.2022.11.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/21/2024]
|
22
|
Wang P, Cai S, Tan W, Yan B, Zhong Y. ClusterNet: a clustering distributed prior embedded detection network for early-stage esophageal squamous cell carcinoma diagnosis. Med Phys 2023; 50:854-866. [PMID: 36222486 DOI: 10.1002/mp.16041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 08/16/2022] [Accepted: 09/19/2022] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND Early and accurate diagnosis of esophageal squamous cell carcinoma (ESCC) is important for reducing mortality. Analyzing intrapapillary capillary loop (IPCL) patterns on magnification endoscopy with narrow band imaging (ME-NBI) has been demonstrated to be effective in the diagnosis of early-stage ESCC. However, even experienced endoscopists may face difficulty in finding and classifying countless IPCLs on ME-NBI. PURPOSE We propose a novel clustering prior embedded detection network, ClusterNet. ClusterNet is capable of automatically analyzing the distribution of IPCLs on ME-NBI and provides endoscopists with multiple types of visualization to review. With ClusterNet's assistance, endoscopists may review ME-NBI images more efficiently and thus predict the pathology and make medical decisions more easily. METHODS We propose the first large-scale ME-NBI dataset with fine-grained annotations obtained by consensus of expert endoscopists. The dataset is split into a training set and an independent testing set at the patient level. With two embedding strategies, ClusterNet can automatically take the clustering effect into consideration. Prior to this work, no existing approach took the clustering effect, which is important for classifying IPCLs, into account. RESULTS ClusterNet achieves an average precision of 81.2% and an average recall of 90.0% for the detection of IPCL patterns on each patient of the independent testing set. We also compare ClusterNet with other state-of-the-art detection approaches. The performance of ClusterNet with embedding strategies is consistently superior to that of other approaches in terms of average precision, recall and F2-score. CONCLUSIONS Experiments demonstrate that our proposed method is able to detect almost all IPCL patterns on ME-NBI and classify them accurately according to the Japanese Endoscopic Society (JES) classification.
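As a supplement to the detection metrics reported above, the following is a minimal sketch (not taken from the ClusterNet code) of how an F2-score, which weights recall over precision, can be computed from per-patient precision and recall values such as those in the abstract.

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta > 1 favours recall, so F2 is common in lesion detection."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Using the averages reported in the abstract (precision 0.812, recall 0.900):
print(round(f_beta(0.812, 0.900), 3))  # -> 0.881
```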
Collapse
Affiliation(s)
- Peisheng Wang
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Shilun Cai
- Zhongshan Hospital, Fudan University, Shanghai, China
| | - Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Yunshi Zhong
- Zhongshan Hospital, Fudan University, Shanghai, China
| |
Collapse
|
23
|
Meinikheim M, Messmann H, Ebigbo A. Role of artificial intelligence in diagnosing Barrett's esophagus-related neoplasia. Clin Endosc 2023; 56:14-22. [PMID: 36646423 PMCID: PMC9902686 DOI: 10.5946/ce.2022.247] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 11/25/2022] [Indexed: 01/18/2023] Open
Abstract
Barrett's esophagus is associated with an increased risk of adenocarcinoma. Thorough screening during endoscopic surveillance is crucial to improve patient prognosis. Detecting and characterizing dysplastic or neoplastic Barrett's esophagus during routine endoscopy are challenging, even for expert endoscopists. Artificial intelligence-based clinical decision support systems have been developed to provide additional assistance to physicians performing diagnostic and therapeutic gastrointestinal endoscopy. In this article, we review the current role of artificial intelligence in the management of Barrett's esophagus and elaborate on its potential future applications.
Collapse
Affiliation(s)
- Michael Meinikheim
- Department of Gastroenterology, University Hospital of Augsburg, Augsburg, Germany. Correspondence: Michael Meinikheim, Department of Gastroenterology, University Hospital of Augsburg, Stenglinstr. 2, D-86156 Augsburg, Germany
| | - Helmut Messmann
- Department of Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
| | - Alanna Ebigbo
- Department of Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
| |
Collapse
|
24
|
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 11/21/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Collapse
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
| | - Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
| | - Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
| | - Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
| |
Collapse
|
25
|
Biswal MR, Delwar TS, Siddique A, Behera P, Choi Y, Ryu JY. Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices. SENSORS (BASEL, SWITZERLAND) 2022; 22:8694. [PMID: 36433289 PMCID: PMC9699191 DOI: 10.3390/s22228694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 11/06/2022] [Accepted: 11/08/2022] [Indexed: 06/16/2023]
Abstract
With the recent growth of the Internet of Things (IoT) and the demand for faster computation, quantized neural networks (QNNs) or QNN-enabled IoT can offer better performance than conventional convolutional neural networks (CNNs). With the aim of reducing memory access costs and increasing computation efficiency, QNN-enabled devices are expected to transform numerous industrial applications with lower processing latency and power consumption. Another form of QNN is the binarized neural network (BNN), which uses two quantization levels. In this paper, CNN-, QNN-, and BNN-based pattern recognition techniques are implemented and analyzed on an FPGA. The FPGA hardware acts as an IoT device due to connectivity with the cloud, and QNN and BNN are considered to offer better performance in terms of low power and low resource use on hardware platforms. The CNN and QNN implementations are compared and analyzed based on their accuracy, weight bit error, ROC curve, and execution speed. The paper also discusses various approaches that can be deployed for optimizing CNN and QNN models with additionally available tools. The work is performed on the Xilinx Zynq 7020 series Pynq Z2 board, which serves as our FPGA-based low-power IoT device. The MNIST and CIFAR-10 databases are considered for simulation and experimentation. The work shows that the accuracy is 95.5% and 79.22% for the MNIST and CIFAR-10 databases, respectively, for full precision (32-bit), and the execution time is 5.8 ms and 18 ms for the MNIST and CIFAR-10 databases, respectively, for full precision (32-bit).
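For readers unfamiliar with the quantization step described above, the following is a minimal NumPy sketch of the general idea behind QNN weight quantization and BNN-style binarization; it is illustrative only and does not reproduce the paper's FPGA/Pynq Z2 implementation.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of weights to a given bit width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable magnitudes at 8 bits
    scale = np.max(np.abs(w)) / levels
    if scale == 0:
        scale = 1.0
    return np.round(w / scale) * scale    # de-quantized back to float for comparison

def binarize(w: np.ndarray) -> np.ndarray:
    """BNN-style binarization: keep only the sign, scaled by the mean magnitude."""
    return np.mean(np.abs(w)) * np.sign(w)

w = np.random.randn(4, 4).astype(np.float32)
print("8-bit max error:", float(np.abs(w - quantize_uniform(w, 8)).max()))  # small
print("1-bit max error:", float(np.abs(w - binarize(w)).max()))             # larger
```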
Collapse
Affiliation(s)
- Manas Ranjan Biswal
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
| | - Tahesin Samira Delwar
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
| | - Abrar Siddique
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
| | - Prangyadarsini Behera
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
| | - Yeji Choi
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
| | - Jee-Youl Ryu
- Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
| |
Collapse
|
26
|
Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4325412. [PMID: 36262620 PMCID: PMC9576362 DOI: 10.1155/2022/4325412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 08/16/2022] [Accepted: 08/20/2022] [Indexed: 11/18/2022]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage system that utilizes two sets of colonoscopy data. However, identifying polyps by visualization has not been addressed. The proposed system is a five-stage system called ColoRectalCADx, which uses three publicly accessible datasets as input data for cancer detection. The three main datasets are CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, system experiments were performed with seven prominent convolutional neural networks (CNNs) (end-to-end) and nine fusion CNN models to extract the spatial features. Afterwards, the end-to-end CNN and fusion features are evaluated. These features are then passed to Discrete Wavelet Transform (DWT) and Support Vector Machine (SVM) classification, which is used to retrieve time- and spatial-frequency features. Experimentally, results were obtained for five stages. For each of the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). For each of the three datasets, from stage 2, the CNN DaRD-22 fusion obtained the optimal test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)). And for stage 4, the ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, Kvasir-SEG, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model. Here, the loss scores obtained were 0.7842 for CVC Clinic DB, 0.6977 for Kvasir2, and 0.6910 for Hyper Kvasir.
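The pipeline above (CNN-derived spatial features, a Discrete Wavelet Transform, then SVM classification) can be illustrated with a minimal sketch; synthetic arrays stand in for the CVC Clinic DB/Kvasir2 images and for the DenseNet-201 feature maps, so this is an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, feat_hw = 200, 16
# Stand-in for pooled CNN feature maps (16x16 per image); labels: polyp vs. not.
features = rng.normal(size=(n_samples, feat_hw, feat_hw))
labels = rng.integers(0, 2, size=n_samples)

def dwt_features(fmap: np.ndarray) -> np.ndarray:
    """Single-level 2-D DWT; concatenate approximation and detail coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(fmap, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

X = np.stack([dwt_features(f) for f in features])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # ~0.5 on random data, by construction
```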
Collapse
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
| | - Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
| | - Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
| |
Collapse
|
27
|
Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P, Vicharueang S. AI-based analysis of oral lesions using novel deep convolutional neural networks for early detection of oral cancer. PLoS One 2022; 17:e0273508. [PMID: 36001628 PMCID: PMC9401150 DOI: 10.1371/journal.pone.0273508] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 08/09/2022] [Indexed: 11/18/2022] Open
Abstract
Artificial intelligence (AI) applications in oncology have developed rapidly, with reported successes in recent years. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset comprising 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs and 300 non-pathological images. Multiclass image classification models were created by using DenseNet-169, ResNet-101, SqueezeNet and Swin-S. Multiclass object detection models were built by using Faster R-CNN, YOLOv5, RetinaNet and CenterNet2. The AUC of the best multiclass image classification CNN model, DenseNet-169, was 1.00 and 0.98 on OSCC and OPMDs, respectively. The AUC of the best multiclass CNN-based object detection model, Faster R-CNN, was 0.88 and 0.64 on OSCC and OPMDs, respectively. In comparison, DenseNet-169 yielded the best multiclass image classification performance with AUC of 1.00 and 0.98 on OSCC and OPMDs, respectively. These values were in line with the performance of experts and superior to those of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to serve as a diagnostic tool to assist GPs in the early detection of oral cancer.
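As a companion to the detection arm of the study, here is a minimal torchvision sketch of setting up a Faster R-CNN detector with a custom head for lesion classes plus background; the class layout is an assumption for illustration, and this is not the authors' trained model or weights.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 1 + 2  # background + OSCC + OPMD (assumed class layout)
model = fasterrcnn_resnet50_fpn(weights=None)        # random weights for illustration
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 480, 640)]                # stand-in for one RGB oral photograph
    out = model(dummy)[0]                            # dict with 'boxes', 'labels', 'scores'
print(out["boxes"].shape, out["labels"].shape)
```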
Collapse
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand
| | - Wasit Limprasert
- College of Interdisciplinary Studies, Thammasat University, Khlong Luang, Pathum Thani, Thailand
| | - Siriwan Suebnukarn
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand
| | | | | | | |
Collapse
|
28
|
Kawahara D, Murakami Y, Tani S, Nagata Y. A prediction model for pathological findings after neoadjuvant chemoradiotherapy for resectable locally advanced esophageal squamous cell carcinoma based on endoscopic images using deep learning. Br J Radiol 2022; 95:20210934. [PMID: 35451338 PMCID: PMC10996327 DOI: 10.1259/bjr.20210934] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 03/28/2022] [Accepted: 04/01/2022] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVES To propose a deep-learning (DL)-based predictive model of the pathological complete response rate for resectable locally advanced esophageal squamous cell carcinoma (SCC) after neoadjuvant chemoradiotherapy (NCRT) using endoscopic images. METHODS AND MATERIALS This retrospective study analyzed 98 patients with locally advanced esophageal cancer treated with preoperative chemoradiotherapy followed by surgery from 2004 to 2016. The patient data were split into two sets: 72 patients for model training and 26 patients for model testing. Patients were classified into two groups based on LC (Group I: responder; Group II: non-responder). The scanned images were converted into Joint Photographic Experts Group (JPEG) format and resized to 150 × 150 pixels. Input images without an imaging filter (w/o filter) and with Laplacian, Sobel, and wavelet imaging filters were fed into a deep-learning model based on a convolutional neural network (CNN) to predict the pathological CR. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were evaluated. RESULTS The average accuracy for cross-validation was 0.64 for w/o filter, 0.69 for the Laplacian filter, 0.71 for the Sobel filter, and 0.81 for the wavelet filter. The average sensitivity for cross-validation was 0.80 for w/o filter, 0.81 for the Laplacian filter, 0.67 for the Sobel filter, and 0.80 for the wavelet filter. The average specificity for cross-validation was 0.37 for w/o filter, 0.55 for the Laplacian filter, 0.68 for the Sobel filter, and 0.81 for the wavelet filter. From the ROC curve, the average AUC for cross-validation was 0.58 for w/o filter, 0.67 for the Laplacian filter, 0.73 for the Sobel filter, and 0.83 for the wavelet filter. CONCLUSIONS The current study improved the accuracy of the DL-based prediction model by applying imaging filters; with the imaging filters, the accuracy was significantly improved. The model may assist clinical oncologists in forming more accurate expectations of the treatment outcome. ADVANCES IN KNOWLEDGE The accuracy of predicting local control after radiotherapy can be improved by applying imaging filters to the input images for deep learning.
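The filter comparison described above can be illustrated with a short sketch; assuming OpenCV and PyWavelets are available, it applies Laplacian, Sobel, and wavelet filters to a 150 × 150 stand-in image of the kind fed to the CNN. It is not the authors' preprocessing code.

```python
import cv2
import numpy as np
import pywt

img = np.random.randint(0, 256, (150, 150), dtype=np.uint8)  # stand-in endoscopic frame

laplacian = cv2.Laplacian(img, cv2.CV_32F)                # second-derivative edges
sobel_x = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)       # horizontal gradients
sobel_y = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)       # vertical gradients
sobel = cv2.magnitude(sobel_x, sobel_y)

cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), "haar")  # wavelet sub-bands
wavelet = cv2.resize(cA, (150, 150))                      # back to the CNN input size

for name, filtered in [("laplacian", laplacian), ("sobel", sobel), ("wavelet", wavelet)]:
    print(name, filtered.shape, float(filtered.mean()))
```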
Collapse
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Yuji Murakami
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Shigeyuki Tani
- School of Medicine, Hiroshima University, Hiroshima, Japan
| | - Yasushi Nagata
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, Japan
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, Japan
| |
Collapse
|
29
|
Luo D, Kuang F, Du J, Zhou M, Liu X, Luo X, Tang Y, Li B, Su S. Artificial Intelligence-Assisted Endoscopic Diagnosis of Early Upper Gastrointestinal Cancer: A Systematic Review and Meta-Analysis. Front Oncol 2022; 12:855175. [PMID: 35756602 PMCID: PMC9229174 DOI: 10.3389/fonc.2022.855175] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/28/2022] [Indexed: 11/17/2022] Open
Abstract
Objective The aim of this study was to assess the diagnostic ability of artificial intelligence (AI) in the detection of early upper gastrointestinal cancer (EUGIC) using endoscopic images. Methods Databases were searched for studies on AI-assisted diagnosis of EUGIC using endoscopic images. The pooled area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) with 95% confidence interval (CI) were calculated. Results Overall, 34 studies were included in our final analysis. Among the 17 image-based studies investigating early esophageal cancer (EEC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.95 (95% CI, 0.95–0.96), 0.95 (95% CI, 0.94–0.95), 10.76 (95% CI, 7.33–15.79), 0.07 (95% CI, 0.04–0.11), and 173.93 (95% CI, 81.79–369.83), respectively. Among the seven patient-based studies investigating EEC detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.94 (95% CI, 0.91–0.96), 0.90 (95% CI, 0.88–0.92), 6.14 (95% CI, 2.06–18.30), 0.07 (95% CI, 0.04–0.11), and 69.13 (95% CI, 14.73–324.45), respectively. Among the 15 image-based studies investigating early gastric cancer (EGC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.94, 0.87 (95% CI, 0.87–0.88), 0.88 (95% CI, 0.87–0.88), 7.20 (95% CI, 4.32–12.00), 0.14 (95% CI, 0.09–0.23), and 48.77 (95% CI, 24.98–95.19), respectively. Conclusions On the basis of our meta-analysis, AI exhibited high accuracy in diagnosis of EUGIC. Systematic Review Registration https://www.crd.york.ac.uk/PROSPERO/, identifier PROSPERO (CRD42021270443).
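For readers less familiar with the summary measures pooled above, the following minimal sketch shows the standard relationships between sensitivity, specificity, and the likelihood-ratio/odds-ratio measures (PLR, NLR, DOR); it is a generic illustration, not the authors' meta-analytic code, and pooled ratios in a meta-analysis are generally not derived directly from pooled sensitivity and specificity.

```python
def diagnostic_summary(sens: float, spec: float) -> dict:
    """Standard definitions: PLR = sens/(1-spec), NLR = (1-sens)/spec, DOR = PLR/NLR."""
    plr = sens / (1.0 - spec)
    nlr = (1.0 - sens) / spec
    return {"PLR": round(plr, 2), "NLR": round(nlr, 2), "DOR": round(plr / nlr, 2)}

# Applied to a single hypothetical study with sensitivity 0.95 and specificity 0.95:
print(diagnostic_summary(0.95, 0.95))  # {'PLR': 19.0, 'NLR': 0.05, 'DOR': 361.0}
```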
Collapse
Affiliation(s)
- De Luo
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Fei Kuang
- Department of General Surgery, Changhai Hospital of The Second Military Medical University, Shanghai, China
| | - Juan Du
- Department of Clinical Medicine, Southwest Medical University, Luzhou, China
| | - Mengjia Zhou
- Department of Ultrasound, Seventh People's Hospital of Shanghai University of Traditional Chinese Medicine, Shanghai, China
| | - Xiangdong Liu
- Department of Hepatobiliary Surgery, Zigong Fourth People's Hospital, Zigong, China
| | - Xinchen Luo
- Department of Gastroenterology, Zigong Third People's Hospital, Zigong, China
| | - Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Bo Li
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Song Su
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| |
Collapse
|
30
|
Shen S, Xie Y, Ju P, Li W, Zhang J, Cai R, Li R. Predictive effect of J waves on cardiac compression and clinical prognosis of esophageal tumors: a retrospective study. J Gastrointest Oncol 2022; 13:923-934. [PMID: 35837153 DOI: 10.21037/jgo-22-371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 05/27/2022] [Indexed: 11/06/2022] Open
Abstract
Background The J wave syndromes (JWS) can be observed in patients with mediastinal tumors, though few studies have verified the statistical correlation between J waves and cardiac compression by tumors. This study aimed to investigate the relationship between J waves and cardiac compression by esophageal tumors and to compare the prediction of clinical prognosis by J waves with that by cardiac compression. Methods We enrolled 273 patients (228 males, 45 females; mean age 63.8±7.5 years) with esophageal tumors admitted to Shanghai Chest Hospital between August 2016 and November 2020. The J wave was defined as a J-point elevation of ≥0.1 mV in a 12-lead electrocardiogram (ECG) and classified into multiple types. Chest computed tomography (CT) was reviewed to clarify the anatomical relationship between the heart and the esophageal tumor. The prognosis of severe cardiac events and survival status were followed up through medical history, examination records and telephone records. Results J waves were present in 141 of the 273 patients. The sensitivity and specificity of cardiac compression by the tumor for J waves were 78.1% and 67.3%, respectively. The odds ratio (OR) of cardiac compression by the tumor for J waves was 7.33 [95% confidence interval (CI): 4.21-12.74; P<0.001]. The Kappa coefficient between J waves and cardiac compression was 0.44±0.05. The significant association between J waves and cardiac compression was independent of other clinical variables (P<0.001). Decreased J wave amplitude was correlated with the disappearance of cardiac compression during follow-up (P=0.03). Patients with J waves had a higher risk of severe cardiac events than those without J waves (OR =2.84, 95% CI: 1.22-6.63; P=0.01). During the follow-up period, we found that the presence of J waves [hazard ratio (HR) =2.28; 95% CI: 1.35-3.84; P=0.002] and cardiac compression by the tumor (HR =2.51; 95% CI: 1.51-4.17; P<0.001) were both negatively correlated with the survival time of patients. Conclusions The presence of J waves could serve as an effective means of predicting the mechanical impact of an esophageal tumor on the heart and plays an important role in predicting patient survival.
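The two association measures used in the abstract, the odds ratio and Cohen's kappa, can be computed from a 2 × 2 table as in the minimal sketch below; the counts are hypothetical stand-ins chosen only for illustration, not the study's patient-level data.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """2x2 table: a = J wave+/compression+, b = J wave+/compression-,
    c = J wave-/compression+, d = J wave-/compression-."""
    return (a * d) / (b * c)

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical counts (273 patients, 141 with J waves, as a plausible split):
a, b, c, d = 110, 31, 33, 99
print("OR:", round(odds_ratio(a, b, c, d), 2), "kappa:", round(cohens_kappa(a, b, c, d), 2))
```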
Collapse
Affiliation(s)
- Songcui Shen
- Department of Cardiac Function, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Yichen Xie
- Department of Radiology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Pengliang Ju
- Department of Cardiac Function, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Wenzhao Li
- Department of Cardiac Function, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Jiayuan Zhang
- Department of Radiology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Ruxin Cai
- Department of Radiotherapy, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Ruogu Li
- Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| |
Collapse
|
31
|
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural network, which is a class of deep learning method, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 articles in esophagus, 30 articles in stomach, 29 articles in CE, and 63 articles in colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been recently reported. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of the retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we have highlighted the published literature along with providing current status and insights into the future of AI in GI endoscopy.
Collapse
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan; Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
| | - Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan.
| | - Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| |
Collapse
|
32
|
Artificial Intelligence in the Management of Barrett’s Esophagus and Early Esophageal Adenocarcinoma. Cancers (Basel) 2022; 14:cancers14081918. [PMID: 35454824 PMCID: PMC9028107 DOI: 10.3390/cancers14081918] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Revised: 04/02/2022] [Accepted: 04/07/2022] [Indexed: 02/06/2023] Open
Abstract
Simple Summary Esophageal adenocarcinoma is increasing in incidence and is the most common subtype of esophageal cancer in Western societies. AI systems are currently under development and validation in many fields of gastroenterology. Abstract Esophageal adenocarcinoma is increasing in incidence and is the most common subtype of esophageal cancer in Western societies. The stepwise progression of Barrett's metaplasia to high-grade dysplasia and invasive adenocarcinoma provides an opportunity for screening and surveillance. There are important unresolved issues, which include (i) refining the definition of the screening population in order to avoid unnecessary invasive diagnostics, (ii) a more precise prediction of the (very heterogeneous) individual progression risk from metaplasia to invasive cancer in order to better tailor surveillance recommendations, (iii) improvement of the quality of endoscopy in order to reduce the high miss rate for early neoplastic lesions, and (iv) support for the diagnosis of tumor infiltration depth in order to guide treatment decisions. Artificial intelligence (AI) systems might be useful as a support to better solve the above-mentioned issues.
Collapse
|
33
|
Sharma P, Hassan C. Artificial Intelligence and Deep Learning for Upper Gastrointestinal Neoplasia. Gastroenterology 2022; 162:1056-1066. [PMID: 34902362 DOI: 10.1053/j.gastro.2021.11.040] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 11/09/2021] [Accepted: 11/19/2021] [Indexed: 12/24/2022]
Abstract
Upper gastrointestinal (GI) neoplasia accounts for 35% of GI cancers and 1.5 million cancer-related deaths every year. Despite its efficacy in preventing cancer mortality, diagnostic upper GI endoscopy is affected by a substantial miss rate of neoplastic lesions due to failure to recognize a visible lesion or imperfect navigation. This may be offset by the real-time application of artificial intelligence (AI) for detection (computer-aided detection [CADe]) and characterization (computer-aided diagnosis [CADx]) of upper GI neoplasia. Stand-alone performance of CADe for esophageal squamous cell neoplasia, Barrett's esophagus-related neoplasia, and gastric cancer showed promising accuracy, with sensitivity ranging between 83% and 93%. However, incorporation of CADe/CADx into clinical practice depends on several factors, such as possible bias in the training or validation phases of these algorithms, their interaction with human endoscopists, and the clinical implications of false-positive results. The aim of this review is to guide the clinician across the multiple steps of AI development in clinical practice.
Collapse
Affiliation(s)
- Prateek Sharma
- University of Kansas School of Medicine, Kansas City, Missouri; Kansas City Veterans Affairs Medical Center, Kansas City, Missouri
| | - Cesare Hassan
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; Humanitas Clinical and Research Center-IRCCS, Endoscopy Unit, Rozzano, Italy.
| |
Collapse
|
34
|
Spadaccini M, Vespa E, Chandrasekar VT, Desai M, Patel HK, Maselli R, Fugazza A, Carrara S, Anderloni A, Franchellucci G, De Marco A, Hassan C, Bhandari P, Sharma P, Repici A. Advanced imaging and artificial intelligence for Barrett's esophagus: What we should and soon will do. World J Gastroenterol 2022; 28:1113-1122. [PMID: 35431503 PMCID: PMC8985480 DOI: 10.3748/wjg.v28.i11.1113] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 08/12/2021] [Accepted: 02/13/2022] [Indexed: 02/06/2023] Open
Abstract
Barrett’s esophagus (BE) is a well-established risk factor for esophageal adenocarcinoma. It is recommended that patients have regular endoscopic surveillance, with the ultimate goal of detecting early-stage neoplastic lesions before they can progress to invasive carcinoma. Detection of either dysplasia or early adenocarcinoma permits curative endoscopic treatment, and with this aim, thorough endoscopic assessment is crucial and improves outcomes. The burden of missed neoplasia in BE is still far from negligible, likely due to inappropriate endoscopic surveillance. Over the last two decades, advanced imaging techniques, moving from traditional dye-spray chromoendoscopy to more practical virtual chromoendoscopy technologies, have been introduced with the aim of enhancing neoplasia detection in BE. As witnessed in other fields, artificial intelligence (AI) has revolutionized the field of diagnostic endoscopy and is set to play a pivotal role in BE as well. The aim of this commentary is to comprehensively summarize present evidence, recent research advances, and future perspectives regarding advanced imaging technology and AI in BE; the combination of computer-aided diagnosis with a widespread adoption of advanced imaging technologies is eagerly awaited. It also provides a useful step-by-step approach for performing high-quality endoscopy in BE, in order to increase the diagnostic yield of endoscopy in clinical practice.
Collapse
Affiliation(s)
- Marco Spadaccini
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Edoardo Vespa
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | | | - Madhav Desai
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
| | - Harsh K Patel
- Department of Internal Medicine, Ochsner Clinic Foundation, New Orleans, LA 70124, United States
| | - Roberta Maselli
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Alessandro Fugazza
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
| | - Silvia Carrara
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
| | - Andrea Anderloni
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
| | - Gianluca Franchellucci
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Alessandro De Marco
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Cesare Hassan
- Endoscopy Unit, Nuovo Regina Margherita Hospital, Roma 00153, Italy
| | - Pradeep Bhandari
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth PO6 3LY, United Kingdom
- School of Pharmacy and Biomedical Sciences, University of Portsmouth, Portsmouth PO6 3LY, United Kingdom
| | - Prateek Sharma
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
| | - Alessandro Repici
- Department of Endoscopy, Humanitas Research Hospital, IRCCS, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| |
Collapse
|
35
|
Visaggi P, Barberio B, Gregori D, Azzolina D, Martinato M, Hassan C, Sharma P, Savarino E, de Bortoli N. Systematic review with meta-analysis: artificial intelligence in the diagnosis of oesophageal diseases. Aliment Pharmacol Ther 2022; 55:528-540. [PMID: 35098562 PMCID: PMC9305819 DOI: 10.1111/apt.16778] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 07/09/2022] [Accepted: 01/09/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Artificial intelligence (AI) has recently been applied to endoscopy and questionnaires for the evaluation of oesophageal diseases (ODs). AIM We performed a systematic review with meta-analysis to evaluate the performance of AI in the diagnosis of malignant and benign OD. METHODS We searched MEDLINE, EMBASE, EMBASE Classic and the Cochrane Library. A bivariate random-effect model was used to calculate pooled diagnostic efficacy of AI models and endoscopists. The reference tests were histology for neoplasms and the clinical and instrumental diagnosis for gastro-oesophageal reflux disease (GERD). The pooled area under the summary receiver operating characteristic (AUROC), sensitivity, specificity, positive and negative likelihood ratio (PLR and NLR) and diagnostic odds ratio (DOR) were estimated. RESULTS For the diagnosis of Barrett's neoplasia, AI had AUROC of 0.90, sensitivity 0.89, specificity 0.86, PLR 6.50, NLR 0.13 and DOR 50.53. AI models' performance was comparable with that of endoscopists (P = 0.35). For the diagnosis of oesophageal squamous cell carcinoma, the AUROC, sensitivity, specificity, PLR, NLR and DOR were 0.97, 0.95, 0.92, 12.65, 0.05 and 258.36, respectively. In this task, AI performed better than endoscopists although without statistically significant differences. In the detection of abnormal intrapapillary capillary loops, the performance of AI was: AUROC 0.98, sensitivity 0.94, specificity 0.94, PLR 14.75, NLR 0.07 and DOR 225.83. For the diagnosis of GERD based on questionnaires, the AUROC, sensitivity, specificity, PLR, NLR and DOR were 0.99, 0.97, 0.97, 38.26, 0.03 and 1159.6, respectively. CONCLUSIONS AI demonstrated high performance in the clinical and endoscopic diagnosis of OD.
Collapse
Affiliation(s)
- Pierfrancesco Visaggi
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| | - Brigida Barberio
- Division of Gastroenterology, Department of Surgery, Oncology and Gastroenterology, University of Padova, Padova, Italy
| | - Dario Gregori
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
| | - Danila Azzolina
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
- Department of Medical Science, University of Ferrara, Ferrara, Italy
| | - Matteo Martinato
- Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, Padova, Italy
| | - Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072 Pieve Emanuele, Milan, Italy
- IRCCS Humanitas Research Hospital, via Manzoni 56, 20089 Rozzano, Milan, Italy
| | - Prateek Sharma
- University of Kansas School of Medicine and VA Medical Center, Kansas City, Missouri, USA
| | - Edoardo Savarino
- Division of Gastroenterology, Department of Surgery, Oncology and Gastroenterology, University of Padova, Padova, Italy
| | - Nicola de Bortoli
- Gastroenterology Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| |
Collapse
|
36
|
van der Putten J, van der Sommen F. AIM in Barrett’s Esophagus. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
37
|
Li Q, Liu BR. Application of artificial intelligence-assisted endoscopic detection of early esophageal cancer. Shijie Huaren Xiaohua Zazhi 2021; 29:1389-1395. [DOI: 10.11569/wcjd.v29.i24.1389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
In recent years, artificial intelligence (AI) combined with endoscopy has emerged in the diagnosis of early esophageal cancer (EC) and achieved satisfactory results. Due to the rapid progression and poor prognosis of EC, early detection and diagnosis are of great value for improving patient prognosis. AI has been applied in the screening of early EC and has shown advantages; notably, it is more accurate than less-experienced endoscopists. In China, the detection of early EC depends on endoscopist expertise and is inevitably subject to interobserver variability. The excellent image recognition ability of AI is well suited to the diagnosis and recognition of EC, thereby reducing missed diagnoses and helping physicians perform endoscopy better. This paper reviews the application and relevant progress of AI in the field of endoscopic detection of early EC (including squamous cell carcinoma and adenocarcinoma), with a focus on the diagnostic performance of AI, such as sensitivity and specificity, in identifying different types of endoscopic images.
Collapse
Affiliation(s)
- Qing Li
- Department of Gastroenterology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450000, Henan Province, China
| | - Bing-Rong Liu
- Department of Gastroenterology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450000, Henan Province, China
| |
Collapse
|
38
|
Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Abe O. Automated detection of brain metastases on non-enhanced CT using single-shot detectors. Neuroradiology 2021; 63:1995-2004. [PMID: 34114064 DOI: 10.1007/s00234-021-02743-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Accepted: 05/30/2021] [Indexed: 12/23/2022]
Abstract
PURPOSE To develop and investigate deep learning-based detectors for brain metastasis detection on non-enhanced (NE) CT. METHODS The study included 116 NECTs from 116 patients (81 men, age 66.5 ± 10.6 years) to train and test single-shot detector (SSD) models using 89 and 27 cases, respectively. The annotation was performed by three radiologists using bounding boxes defined on contrast-enhanced CT (CECT) images. NECTs were coregistered and resliced to CECTs. The detection performance was evaluated at the SSD's 50% confidence threshold using sensitivity, positive-predictive value (PPV), and the false-positive rate per scan (FPR). For false negatives and true positives, binary logistic regression was used to examine the possible contributing factors. RESULTS For lesions 6 mm or larger, the SSD achieved a sensitivity of 35.4% (95% confidence interval (CI): [32.3%, 33.5%]; 51/144) with an FPR of 14.9 (95% CI: [12.4, 13.9]). The overall sensitivity was 23.8% (95% CI: [21.3%, 22.8%]; 55/231) and PPV was 19.1% (95% CI: [18.5%, 20.4%]; 98/513), with an FPR of 15.4 (95% CI: [12.9, 14.5]). Ninety-five percent of the lesions that the SSD failed to detect were also undetectable to radiologists (168/176). Twenty-four percent of the lesions (13/50) detected by the SSD were undetectable to radiologists. Logistic regression analysis indicated that density, necrosis, and size contributed to the lesions' visibility for radiologists, while for the SSD, the surrounding edema also enhanced the detection performance. CONCLUSION The SSD model we developed could detect brain metastases larger than 6 mm to some extent, a quarter of which were even retrospectively unrecognizable to radiologists.
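The per-scan evaluation protocol described above (a 50% confidence threshold, then sensitivity, PPV, and false positives per scan) can be sketched as follows; the boxes, scores, and IoU threshold are hypothetical illustrations, not the study's data or pipeline.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def evaluate_scan(dets: List[Tuple[Box, float]], gts: List[Box],
                  conf_thr: float = 0.5, iou_thr: float = 0.5):
    kept = [b for b, score in dets if score >= conf_thr]           # confidence threshold
    matched = {i for i, g in enumerate(gts) if any(iou(g, d) >= iou_thr for d in kept)}
    tp = sum(any(iou(d, g) >= iou_thr for g in gts) for d in kept)
    fp = len(kept) - tp
    return len(matched), len(gts), tp, fp   # detected lesions, lesions, TP dets, FP dets

dets = [((10, 10, 30, 30), 0.9), ((50, 50, 70, 70), 0.4), ((80, 80, 95, 95), 0.7)]
gts = [(12, 12, 28, 28), (48, 52, 72, 68)]
hit, total, tp, fp = evaluate_scan(dets, gts)
print(f"sensitivity {hit}/{total}, PPV {tp}/{tp + fp}, FP per scan {fp}")
```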
Collapse
Affiliation(s)
- Shimpei Kato
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
| | - Shiori Amemiya
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan.
| | - Hidemasa Takao
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
| | - Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa, 213-8507, Japan
| | - Naoya Sakamoto
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
| | - Osamu Abe
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
| |
Collapse
|
39
|
Wang J, Jin Y, Cai S, Xu H, Heng PA, Qin J, Wang L. Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network. Med Image Anal 2021; 75:102291. [PMID: 34753019 DOI: 10.1016/j.media.2021.102291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 10/22/2021] [Accepted: 10/25/2021] [Indexed: 10/19/2022]
Abstract
We propose a novel shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection (ESD) surgery. This task is of great clinical significance but extremely challenging due to bleeding, lighting reflection, and motion blur in the complicated surgical environment. Compared with existing solutions, which either neglect geometric relationships among targeting objects or capture the relationships by using complicated aggregation schemes, the proposed network is capable of achieving satisfactory accuracy while maintaining real-time performance by taking full advantage of the spatial relations among landmarks. We first devise an algorithm to automatically generate relation keypoint heatmaps, which are able to intuitively represent the prior knowledge of spatial relations among landmarks without using any extra manual annotation efforts. We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process. While one scheme introduces pixel-level regularization by multi-task learning, the other integrates global-level regularization by harnessing a newly designed grouped consistency evaluator, which adds relation constraints to the proposed network in an adversarial manner. Both schemes are beneficial to the model in training, and can be readily unloaded in inference to achieve real-time detection. We establish a large in-house dataset of ESD surgery for esophageal cancer to validate the effectiveness of our proposed method. Extensive experimental results demonstrate that our approach outperforms state-of-the-art methods in terms of accuracy and efficiency, achieving better detection results faster. Promising results on two downstream applications further corroborate the great potential of our method in ESD clinical practice.
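To make the idea of automatically generated relation keypoint heatmaps more concrete, here is a minimal NumPy sketch of one plausible construction (Gaussian heatmaps centred on the midpoints of landmark pairs); the paper's exact generation algorithm is not reproduced here, and the coordinates are hypothetical.

```python
import itertools
import numpy as np

def gaussian_heatmap(shape, center, sigma=4.0):
    """Render a single 2-D Gaussian peak at (x, y) = center."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

def relation_heatmaps(landmarks, shape=(96, 96)):
    """One heatmap per landmark pair, centred on the pair's midpoint."""
    maps = []
    for (x1, y1), (x2, y2) in itertools.combinations(landmarks, 2):
        maps.append(gaussian_heatmap(shape, ((x1 + x2) / 2, (y1 + y2) / 2)))
    return np.stack(maps)

landmarks = [(20, 30), (60, 35), (45, 70)]   # hypothetical ESD landmark coordinates
hm = relation_heatmaps(landmarks)
print(hm.shape, hm.max())                    # (3, 96, 96), peaks near 1.0
```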
Collapse
Affiliation(s)
- Jiacheng Wang
- Department of Computer Science at School of Informatics, Xiamen University, Xiamen 361005, China
| | - Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
| | - Shuntian Cai
- Department of Gastroenterology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, China
| | - Hongzhi Xu
- Department of Gastroenterology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
| | - Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
| | - Liansheng Wang
- Department of Computer Science at School of Informatics, Xiamen University, Xiamen 361005, China.
| |
Collapse
|
40
|
Yang H, Hu B. Early gastrointestinal cancer: The application of artificial intelligence. Artif Intell Gastrointest Endosc 2021; 2:185-197. [DOI: 10.37126/aige.v2.i4.185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 06/25/2021] [Accepted: 08/18/2021] [Indexed: 02/06/2023] Open
Abstract
Early gastrointestinal (GI) cancer has been at the core of clinical endoscopic work. Its early detection and treatment are tightly associated with patients’ prognoses. As a novel technology, artificial intelligence has been improved and applied in the field of endoscopy. Studies on the detection, diagnosis, risk, and prognosis evaluation of diseases in the GI tract have been developing, covering precancerous lesions, adenoma, early GI cancers, and advanced GI cancers. In this review, research on the esophagus, stomach, and colon is summarized and linked to the progression from precancerous lesions to early GI cancer, such as from Barrett’s esophagus to early esophageal cancer, from dysplasia to early gastric cancer, and from adenoma to early colonic cancer. The current status of research on early GI cancers and artificial intelligence is provided.
Collapse
Affiliation(s)
- Hang Yang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
| | - Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
| |
Collapse
|
41
|
Gao X, Braden B. Artificial intelligence in endoscopy: The challenges and future directions. Artif Intell Gastrointest Endosc 2021; 2:117-126. [DOI: 10.37126/aige.v2.i4.117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/20/2021] [Accepted: 07/15/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence-based approaches, in particular deep learning, have achieved state-of-the-art performance in medical fields, with an increasing number of software systems being approved in both Europe and the United States. This paper reviews their applications to the early detection of oesophageal cancers, with a focus on their advantages and pitfalls. The paper concludes with future recommendations towards the development of real-time, clinically implementable, interpretable and robust diagnosis support systems.
Collapse
Affiliation(s)
- Xiaohong Gao
- Department of Computer Science, Middlesex University, London NW4 4BT, United Kingdom
| | - Barbara Braden
- Translational Gastroenterology Unit, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, United Kingdom
| |
Collapse
|
42
|
Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P. Automatic classification and detection of oral cancer in photographic images using deep learning algorithms. J Oral Pathol Med 2021; 50:911-918. [PMID: 34358372 DOI: 10.1111/jop.13227] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 06/14/2021] [Accepted: 07/04/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Oral cancer is a deadly disease among the most common malignant tumors worldwide, and it has become an increasingly important public health problem in developing and low-to-middle-income countries. This study aims to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening. METHODS The study included 700 clinical oral photographs, collected retrospectively from the oral and maxillofacial center, which were divided into 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were created by using DenseNet121 and Faster R-CNN, respectively. Four hundred and ninety images were randomly selected as training data. In addition, 70 and 140 images were assigned as validation and testing data, respectively. RESULTS The classification accuracy of the DenseNet121 model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The detection accuracy of the Faster R-CNN model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision-recall curve of 0.79. CONCLUSION The DenseNet121 and Faster R-CNN algorithms were shown to offer acceptable potential for the classification and detection of cancerous lesions in oral photographic images.
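The classification side of the study (a DenseNet121 backbone with a two-class head for OSCC versus normal mucosa) can be sketched with torchvision as below; random tensors stand in for the photographs, and this is an illustrative setup rather than the authors' training code.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # OSCC vs. normal mucosa

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random stand-in data (batch of 4 RGB photos).
images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```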
Collapse
Affiliation(s)
- Kritsasith Warin
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand
| | - Wasit Limprasert
- College of Interdisciplinary Studies, Thammasat University, Pathum Thani, Thailand
| | | | - Suthin Jinaporntham
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Khon Kaen University, Khon Kaen, Thailand
| | | |
Collapse
|
43
|
Bang CS. [Deep Learning in Upper Gastrointestinal Disorders: Status and Future Perspectives]. THE KOREAN JOURNAL OF GASTROENTEROLOGY 2021; 75:120-131. [PMID: 32209800 DOI: 10.4166/kjg.2020.75.3.120] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 03/01/2020] [Accepted: 03/02/2020] [Indexed: 12/18/2022]
Abstract
Artificial intelligence using deep learning has been applied to gastrointestinal disorders for the detection, classification, and delineation of various lesion images. With the accumulation of enormous medical records, the evolution of computation power with graphic processing units, and the widespread use of open-source libraries in large-scale machine learning processes, medical artificial intelligence is overcoming its traditional limitations. This paper explains the basic concepts of deep learning model establishment and summarizes previous studies on upper gastrointestinal disorders. The limitations and perspectives on future development are also discussed.
Collapse
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea
| |
Collapse
|
44
|
Bang CS, Lee JJ, Baik GH. Computer-aided diagnosis of esophageal cancer and neoplasms in endoscopic images: a systematic review and meta-analysis of diagnostic test accuracy. Gastrointest Endosc 2021; 93:1006-1015.e13. [PMID: 33290771 DOI: 10.1016/j.gie.2020.11.025] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 11/20/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Diagnosis of esophageal cancer or precursor lesions by endoscopic imaging depends on endoscopist expertise and is inevitably subject to interobserver variability. Studies on computer-aided diagnosis (CAD) using deep learning or machine learning are on the increase. However, studies with small sample sizes are limited by inadequate statistical strength. Here, we used a meta-analysis to evaluate the diagnostic test accuracy (DTA) of CAD algorithms of esophageal cancers or neoplasms using endoscopic images. METHODS Core databases were searched for studies based on endoscopic imaging using CAD algorithms for the diagnosis of esophageal cancer or neoplasms and presenting data on diagnostic performance, and a systematic review and DTA meta-analysis were performed. RESULTS Overall, 21 and 19 studies were included in the systematic review and DTA meta-analysis, respectively. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD algorithms for the diagnosis of esophageal cancer or neoplasms in the image-based analysis were 0.97 (95% confidence interval [CI], 0.95-0.99), 0.94 (95% CI, 0.89-0.96), 0.88 (95% CI, 0.76-0.94), and 108 (95% CI, 43-273), respectively. Meta-regression showed no heterogeneity, and no publication bias was detected. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD algorithms for the diagnosis of esophageal cancer invasion depth were 0.96 (95% CI, 0.86-0.99), 0.90 (95% CI, 0.88-0.92), 0.88 (95% CI, 0.83-0.91), and 138 (95% CI, 12-1569), respectively. CONCLUSIONS CAD algorithms showed high accuracy for the automatic endoscopic diagnosis of esophageal cancer and neoplasms. The limitations regarding external validation and performance in clinical applications still need to be overcome.
Collapse
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea
| | - Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Korea
| | - Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea
| |
Collapse
|
45
|
Pang X, Zhao Z, Weng Y. The Role and Impact of Deep Learning Methods in Computer-Aided Diagnosis Using Gastrointestinal Endoscopy. Diagnostics (Basel) 2021; 11:694. [PMID: 33919669 PMCID: PMC8069844 DOI: 10.3390/diagnostics11040694] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 03/24/2021] [Accepted: 04/01/2021] [Indexed: 12/18/2022] Open
Abstract
At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and better suited to clinical practice than traditional machine learning. Applying traditional machine learning approaches in clinical practice is very challenging because medical data typically lack well-defined, characteristic features. However, deep learning methods with self-learning abilities can effectively exploit powerful computing resources to learn intricate and abstract features. Thus, they are promising for the classification and detection of lesions through gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study reviews the development of deep learning-based CAD systems that assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarizes the limitations of current methods and presents prospects for future research.
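The deep learning CAD classifiers discussed in such reviews are typically pretrained CNNs fine-tuned on labeled endoscopic frames. The sketch below is a minimal, hypothetical example of that pattern in PyTorch/torchvision; the choice of ResNet-50 and the three class labels are assumptions for illustration, not details from the paper, and the weights API requires torchvision 0.13 or later.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. normal / premalignant / malignant (assumed labels)

# Pretrained backbone with its classification head replaced for the new task
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of endoscopic images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()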
Collapse
Affiliation(s)
- Xuejiao Pang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China;
| | - Zijian Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China;
| | - Ying Weng
- School of Computer Science, University of Nottingham, Nottingham NG7 2RD, UK;
| |
Collapse
|
46
|
Liu Y. Artificial intelligence-assisted endoscopic detection of esophageal neoplasia in early stage: The next step? World J Gastroenterol 2021; 27:1392-1405. [PMID: 33911463 PMCID: PMC8047537 DOI: 10.3748/wjg.v27.i14.1392] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 02/23/2021] [Accepted: 03/13/2021] [Indexed: 02/06/2023] Open
Abstract
Esophageal cancer (EC) is a common malignant tumor of the digestive tract that originates from the epithelium of the esophageal mucosa. It has been confirmed that early EC lesions can be cured by endoscopic therapy, with a curative effect equivalent to that of surgery. Upper gastrointestinal endoscopy remains the gold standard for EC diagnosis, but the accuracy of endoscopic examination largely depends on the expertise of the examiner. Artificial intelligence (AI) has been applied to the screening of early EC and has shown advantages; notably, it is more accurate than less-experienced endoscopists. This paper reviews the application of AI to the endoscopic detection of early EC, including squamous cell carcinoma and adenocarcinoma, and describes the relevant progress. Although most studies to date evaluating the clinical application of AI in early EC endoscopic detection have focused on still images, AI-assisted real-time detection based on live-stream video may be the next step.
Collapse
Affiliation(s)
- Yong Liu
- Department of Thoracic Surgery, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430011, Hubei Province, China
| |
Collapse
|
47
|
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most related work based on DL approaches extracted spatial features only. However, in the second stage of Gastro-CADx, the features extracted in the first stage are transformed with the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which extract temporal-frequency and spatial-frequency features; a feature reduction procedure is also performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused by concatenation to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved an accuracy of 97.3% and 99.7% on Datasets I and II, respectively. The results were compared with recent related works, and the comparison showed that the proposed approach classifies GI diseases with higher accuracy than other methods. Thus, it can be used to reduce medical complications, death rates, and the cost of treatment, and it can help gastroenterologists produce more accurate diagnoses while lowering inspection time.
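The three-stage idea can be pictured with a minimal sketch: stage 1 spatial CNN features (assumed precomputed here), stage 2 DCT and DWT transforms with truncation as a stand-in for feature reduction, and stage 3 concatenation-based fusion feeding a classifier. The Python code below is an illustrative approximation, not the authors' implementation; the Haar wavelet, feature sizes, random toy data, and the SVM classifier are assumptions.

import numpy as np
import pywt
from scipy.fft import dct
from sklearn.svm import SVC

def transform_features(spatial: np.ndarray, keep: int = 128):
    """Stage 2: apply DCT and DWT to CNN feature vectors and truncate."""
    dct_feats = dct(spatial, norm="ortho", axis=1)[:, :keep]   # spatial-frequency features
    cA, cD = pywt.dwt(spatial, "haar", axis=1)                 # wavelet sub-bands
    dwt_feats = np.hstack([cA, cD])[:, :keep]
    return dct_feats, dwt_feats

# Stage 1 is assumed done: each row is a CNN feature vector for one image
rng = np.random.default_rng(0)
spatial = rng.normal(size=(200, 1024)).astype(np.float32)
labels = rng.integers(0, 4, size=200)                          # four assumed GI classes

dct_feats, dwt_feats = transform_features(spatial)
fused = np.hstack([dct_feats, dwt_feats])                      # Stage 3: feature fusion

clf = SVC(kernel="rbf").fit(fused, labels)                     # stand-in end classifier
print("training accuracy:", clf.score(fused, labels))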
Collapse
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
| | - Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
| |
Collapse
|
48
|
Ghatwary N, Zolgharni M, Janan F, Ye X. Learning Spatiotemporal Features for Esophageal Abnormality Detection From Endoscopic Videos. IEEE J Biomed Health Inform 2021; 25:131-142. [PMID: 32750901 DOI: 10.1109/jbhi.2020.2995193] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Esophageal cancer has a high mortality rate, and early detection of esophageal abnormalities (i.e., precancerous and early cancerous lesions) can improve patient survival. Deep learning-based methods have recently been proposed for detecting selected types of esophageal abnormality from endoscopic images. However, no methods in the literature cover detection from endoscopic videos, detection from challenging frames, or detection of more than one type of esophageal abnormality. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D Convolutional Neural Network (3DCNN) and a Convolutional LSTM (ConvLSTM) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is used by a region proposal network and an ROI pooling layer to produce bounding boxes that localize abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF), which improves the overall performance of the model by recovering missing regions in neighboring frames within the same clip. We extensively validated our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance on different evaluation metrics, with 93.7% recall, 92.7% precision, and 93.2% F-measure. Moreover, because no results for esophageal abnormality detection from endoscopic videos have been reported in the literature, we tested the model on a publicly available colonoscopy video dataset to validate its robustness, achieving polyp detection performance of 81.18% recall, 96.45% precision, and 88.16% F-measure, compared with state-of-the-art results of 78.84% recall, 90.51% precision, and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.
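To make the convolutional-recurrent component concrete, the sketch below implements a generic ConvLSTM cell in PyTorch and steps it over per-frame feature maps. It illustrates the mechanism only and is not the authors' 3D Sequential DenseConvLstm architecture; channel and spatial sizes are arbitrary assumptions.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # One convolution produces all four gates at once
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)          # update cell memory
        h = o * torch.tanh(c)                  # new hidden feature map
        return h, c

# Toy run: feature maps for 8 frames from a (hypothetical) 3D-CNN backbone
cell = ConvLSTMCell(in_ch=64, hid_ch=32)
frames = torch.randn(8, 1, 64, 28, 28)         # (time, batch, channels, H, W)
h = torch.zeros(1, 32, 28, 28)
c = torch.zeros_like(h)
for t in range(frames.size(0)):
    h, c = cell(frames[t], (h, c))
print(h.shape)                                 # accumulated spatiotemporal feature map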
Collapse
|
49
|
van der Putten J, van der Sommen F. AIM in Barrett’s Esophagus. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_166-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
50
|
Iwagami H, Ishihara R, Aoyama K, Fukuda H, Shimamoto Y, Kono M, Nakahira H, Matsuura N, Shichijo S, Kanesaka T, Kanzaki H, Ishii T, Nakatani Y, Tada T. Artificial intelligence for the detection of esophageal and esophagogastric junctional adenocarcinoma. J Gastroenterol Hepatol 2021; 36:131-136. [PMID: 32511793 DOI: 10.1111/jgh.15136] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 05/03/2020] [Accepted: 06/05/2020] [Indexed: 12/12/2022]
Abstract
BACKGROUND AND AIM Conventional endoscopy for the early detection of esophageal and esophagogastric junctional adenocarcinoma (E/J cancer) is limited because early lesions are asymptomatic and the associated mucosal changes are subtle. There are no reports on artificial intelligence (AI) diagnosis of E/J cancer from Asian countries. Therefore, we aimed to develop a computerized image analysis system using deep learning for the detection of E/J cancers. METHODS A total of 1172 images from 166 pathologically proven superficial E/J cancer cases and 2271 images of normal esophagogastric junctional mucosa from 219 cases were used as training data. A total of 232 images from 36 cancer cases and 43 non-cancerous cases were used as validation test data. The same validation test data were diagnosed by 15 board-certified specialists (experts). RESULTS The sensitivity, specificity, and accuracy of the AI system were 94%, 42%, and 66%, respectively, and those of the experts were 88%, 43%, and 63%, respectively. The sensitivity of the AI system was favorable, while its specificity for non-cancerous lesions was similar to that of the experts. Interobserver agreement among the experts for detecting superficial E/J cancer was fair (Fleiss' kappa = 0.26, z = 20.4, P < 0.001). CONCLUSIONS Our AI system achieved high sensitivity and acceptable specificity for the detection of E/J cancers and may be a good supporting tool for E/J cancer screening.
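For reference, the two kinds of numbers reported here, per-image sensitivity/specificity and Fleiss' kappa for interobserver agreement, can be computed as in the toy Python sketch below, which uses made-up predictions and ratings rather than the study's data.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# AI predictions vs. ground truth (1 = cancer, 0 = non-cancer) for toy images
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
pred  = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 0])
tp = np.sum((pred == 1) & (truth == 1)); fn = np.sum((pred == 0) & (truth == 1))
tn = np.sum((pred == 0) & (truth == 0)); fp = np.sum((pred == 1) & (truth == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))

# Interobserver agreement: rows = images, columns = expert raters (0/1 calls)
ratings = np.array([
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
])
table, _ = aggregate_raters(ratings)          # per-image counts for each category
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))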
Collapse
Affiliation(s)
- Hiroyoshi Iwagami
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Ryu Ishihara
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | | | - Hiromu Fukuda
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Yusaku Shimamoto
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Mitsuhiro Kono
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Hiroko Nakahira
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Noriko Matsuura
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Satoki Shichijo
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Takashi Kanesaka
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Hiromitsu Kanzaki
- Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
| | - Tatsuya Ishii
- Center for Gastroenterology, Teine Keijinkai Hospital, Sapporo, Japan
| | - Yasuki Nakatani
- Department of Gastroenterology and Hepatology, Japanese Red Cross Society Wakayama Medical Center, Wakayama, Japan
| | - Tomohiro Tada
- Engineering, AI Medical Service Inc., Tokyo, Japan.,Department of Gastroenterology, Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
| |
Collapse
|