1
Akgun E, Uysal M, Avci SN, Berber E. The use of artificial intelligence to detect parathyroid tissue on ex vivo specimens during thyroidectomy and parathyroidectomy procedures using near-infrared autofluorescence signals. Surgery 2024; 176:1396-1401. [PMID: 39147664] [DOI: 10.1016/j.surg.2024.07.015]
Abstract
BACKGROUND In thyroidectomy and parathyroidectomy procedures, diagnostic dilemmas related to whether an index tissue is of parathyroid or nonparathyroid origin frequently arise. Current options of frozen section and parathyroid aspiration are time-consuming. Parathyroid glands appear brighter than surrounding tissues on near-infrared autofluorescence imaging. The aim of this study was to develop an artificial intelligence model differentiating parathyroid tissue on surgical specimens based on near-infrared autofluorescence. METHODS With institutional review board approval, an image library of ex vivo specimens obtained in thyroidectomy and parathyroidectomy procedures was created between November 2019 and April 2023 at a single academic center. Ex vivo autofluorescence images of surgically removed parathyroid glands, thyroid glands, lymph nodes, and thymic tissue were uploaded into an artificial intelligence platform. Two different models were trained, with the first model using autofluorescence images from all specimens, including thyroid, and the second model excluding thyroid, to prevent the effect of specimen size on the results. Deep-learning models were trained to detect autofluorescence signals specific to parathyroid glands. Randomly chosen 80% of data were used for training, 10% for validation, and 10% for testing. Recall, precision, and area under the curve of models were calculated. RESULTS Surgical procedures included 377 parathyroidectomies, 239 total thyroidectomies, 97 thyroid lobectomies, and 32 central neck dissections. For the development of the model, 1151 images from a total of 678 procedures were used. The dataset comprised 648 parathyroid, 379 thyroid, 104 lymph node, and 20 thymic tissue images. The overall precision, recall, and area under the curve of the model to detect parathyroid tissue were 96.5%, 96.5%, and 0.985, respectively. False negatives were related to dark and large parathyroid glands. CONCLUSION The visual deep-learning model developed to identify parathyroid tissue in ex vivo specimens during thyroidectomy and parathyroidectomy demonstrated a high sensitivity and positive predictive value. This suggests potential utility of near-infrared autofluorescence imaging to improve intraoperative efficiency by reducing the need for frozen sections and parathyroid hormone aspirations to confirm parathyroid tissue.
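The evaluation protocol described in this abstract (a random 80/10/10 split scored with precision, recall, and AUC) can be reproduced generically; the minimal sketch below is not the authors' pipeline and uses placeholder features and labels with scikit-learn.

```python
# Minimal sketch (not the authors' pipeline): an 80/10/10 split with precision,
# recall, and AUC, as described in the abstract. Features and labels are random
# placeholders; any binary classifier could be trained on the training portion.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1151, 256))     # hypothetical per-image feature vectors
y = rng.integers(0, 2, size=1151)    # 1 = parathyroid, 0 = non-parathyroid tissue

# 80% training, 10% validation, 10% testing
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42, stratify=y_rest)

# ... a model would be trained on (X_train, y_train) and tuned on the validation set ...
probs = rng.uniform(size=len(y_test))        # placeholder predicted probabilities
preds = (probs >= 0.5).astype(int)

print("precision:", precision_score(y_test, preds))
print("recall:   ", recall_score(y_test, preds))
print("AUC:      ", roc_auc_score(y_test, probs))
```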
Affiliation(s)
- Ege Akgun
- Department of Endocrine Surgery, Endocrinology and Metabolism Institute, Cleveland Clinic, OH
- Melis Uysal
- Department of Endocrine Surgery, Endocrinology and Metabolism Institute, Cleveland Clinic, OH
- Seyma Nazli Avci
- Department of Endocrine Surgery, Endocrinology and Metabolism Institute, Cleveland Clinic, OH
- Eren Berber
- Department of Endocrine Surgery, Endocrinology and Metabolism Institute, Cleveland Clinic, OH
2
Mota J, João Almeida M, Mendes F, Martins M, Ribeiro T, Afonso J, Cardoso P, Cardoso H, Andrade P, Ferreira J, Macedo G, Mascarenhas M. A Comprehensive Review of Artificial Intelligence and Colon Capsule Endoscopy: Opportunities and Challenges. Diagnostics (Basel) 2024; 14:2072. [PMID: 39335751] [PMCID: PMC11431528] [DOI: 10.3390/diagnostics14182072]
Abstract
Colon capsule endoscopy (CCE) enables a comprehensive, non-invasive, and painless evaluation of the colon, although it still has limited indications. The lengthy reading times hinder its wider implementation, a drawback that could potentially be overcome through the integration of artificial intelligence (AI) models. Studies employing AI, particularly convolutional neural networks (CNNs), demonstrate great promise in using CCE as a viable option for detecting certain diseases and alterations in the colon, compared to other methods like colonoscopy. Additionally, employing AI models in CCE could pave the way for a minimally invasive panenteric or even panendoscopic solution. This review aims to provide a comprehensive summary of the current state of the art of AI in CCE while also addressing the challenges, both technical and ethical, associated with broadening indications for AI-powered CCE. Finally, it briefly reflects on the potential environmental advantages of this method compared with alternative ones.
Affiliation(s)
- Joana Mota
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Maria João Almeida
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- João Afonso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Helder Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
- Patricia Andrade
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Digestive Artificial Intelligence Development, 4200-135 Porto, Portugal
- Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
- ManopH Gastroenterology Clinic, 4000-432 Porto, Portugal
3
Wan JJ, Zhu PC, Chen BL, Yu YT. A semantic feature enhanced YOLOv5-based network for polyp detection from colonoscopy images. Sci Rep 2024; 14:15478. [PMID: 38969765] [PMCID: PMC11226707] [DOI: 10.1038/s41598-024-66642-5]
Abstract
Colorectal cancer (CRC) is a common digestive system tumor with high morbidity and mortality worldwide. At present, the use of computer-assisted colonoscopy technology to detect polyps is relatively mature, but it still faces some challenges, such as missed or false detection of polyps. Improving the accuracy of polyp detection is therefore key to effective colonoscopy. To address this problem, this paper proposes an improved YOLOv5-based polyp detection method for colorectal cancer. The method incorporates a new structure called P-C3 into the backbone and neck networks of the model to enhance feature representation. In addition, a contextual feature augmentation module was introduced at the bottom of the backbone network to increase the receptive field for multi-scale feature information and to focus on polyp features through a coordinate attention mechanism. The experimental results show that, compared with traditional object detection algorithms, the proposed model offers significant advantages in polyp detection accuracy, particularly in recall, which largely addresses the problem of missed polyps. This study should help improve endoscopists' polyp/adenoma detection rates during colonoscopy and is also of significance for clinical practice.
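The paper's modified P-C3 and coordinate-attention network is not publicly packaged, but the general workflow of running a YOLOv5-family detector on a colonoscopy frame can be sketched as below; this assumes internet access for torch.hub to fetch the stock ultralytics/yolov5 model, and "frame.jpg" is a hypothetical image path.

```python
# Baseline sketch only: running a stock YOLOv5 detector on a single colonoscopy
# frame via torch.hub (requires internet to fetch the ultralytics/yolov5 repo and
# weights). The paper's P-C3 / coordinate-attention variant is not reproduced here,
# and "frame.jpg" is a hypothetical local image path.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("frame.jpg")    # inference on one image
results.print()                 # summary of detections
boxes = results.xyxy[0]         # tensor rows: [x1, y1, x2, y2, confidence, class]
print(boxes)
```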
Affiliation(s)
- Jing-Jing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223023, Jiangsu, China
- Peng-Cheng Zhu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bo-Lun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Yong-Tao Yu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
4
Kumar A, Goyal A. Emerging molecules, tools, technology, and future of surgical knife in gastroenterology. World J Gastrointest Surg 2024; 16:988-998. [PMID: 38690056] [PMCID: PMC11056674] [DOI: 10.4240/wjgs.v16.i4.988]
Abstract
The 21st century has started with several innovations in the medical sciences, with wide applications in health care management. These developments have taken place in the field of medicines (newer drugs/molecules) and in various tools and technologies, which have completely changed patient management, including abdominal surgery. Surgery for abdominal diseases has moved from maximally invasive to minimally invasive (laparoscopic and robotic) surgery. Some of the newer medicines have also affected the need for surgical intervention. This article focuses on these emerging molecules, tools, and technologies, their impact on the present form of surgery, and their future effects on surgical intervention in gastroenterological diseases.
Affiliation(s)
- Ashok Kumar
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
- Anirudh Goyal
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
5
Sharma A, Kumar R, Yadav G, Garg P. Artificial intelligence in intestinal polyp and colorectal cancer prediction. Cancer Lett 2023; 565:216238. [PMID: 37211068] [DOI: 10.1016/j.canlet.2023.216238]
Abstract
Artificial intelligence (AI) algorithms and their application to disease detection and decision support for healthcare professionals have greatly evolved in the recent decade. AI has been widely applied and explored in gastroenterology for endoscopic analysis to diagnose intestinal cancers, premalignant polyps, gastrointestinal inflammatory lesions, and bleeding. Patients' responses to treatments and prognoses have both been predicted using AI by combining multiple algorithms. In this review, we explore the recent applications of AI algorithms in the identification and characterization of intestinal polyps and in colorectal cancer prediction. AI-based prediction models have the potential to help medical practitioners diagnose, establish prognoses, and reach accurate conclusions for the treatment of patients. With the understanding that health authorities will require rigorous validation of AI approaches in randomized controlled studies before widespread clinical use, the article also discusses the limitations and challenges associated with deploying AI systems to diagnose intestinal malignancies and premalignant lesions.
Affiliation(s)
- Anju Sharma
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
- Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India
- Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
- Garima Yadav
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India
- Prabha Garg
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
6
Mazumdar S, Sinha S, Jha S, Jagtap B. Computer-aided automated diminutive colonic polyp detection in colonoscopy by using deep machine learning system; first indigenous algorithm developed in India. Indian J Gastroenterol 2023; 42:226-232. [PMID: 37145230] [DOI: 10.1007/s12664-022-01331-7]
Abstract
BACKGROUND Colonic polyps can be detected and resected during a colonoscopy before cancer development. However, about one-quarter of polyps may be missed due to their small size, location, or human error. An artificial intelligence (AI) system can improve polyp detection and reduce colorectal cancer incidence. We are developing an indigenous AI system to detect diminutive polyps in real-life scenarios that is compatible with any high-definition colonoscopy and endoscopic video-capture software. METHODS We trained a masked region-based convolutional neural network model to detect and localize colonic polyps. Three independent datasets of colonoscopy videos comprising 1,039 image frames were used and divided into a training dataset of 688 frames and a testing dataset of 351 frames. Of the 1,039 image frames, 231 were from real-life colonoscopy videos from our centre. The rest were from publicly available image frames already modified to be directly utilizable for developing the AI system. The image frames of the testing dataset were also augmented by rotating and zooming the images to replicate real-life distortions of images seen during colonoscopy. The AI system was trained to localize the polyp by creating a 'bounding box'. It was then applied to the testing dataset to test its accuracy in detecting polyps automatically. RESULTS The AI system achieved a mean average precision (equivalent to specificity) of 88.63% for automatic polyp detection. All polyps in the testing dataset were identified by the AI, i.e., there were no false-negative results (sensitivity of 100%). The mean polyp size in the study was 5 (± 4) mm. The mean processing time per image frame was 96.4 minutes. CONCLUSIONS This AI system, when applied to real-life colonoscopy images with wide variations in bowel preparation and small polyp size, can detect colonic polyps with a high degree of accuracy.
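For orientation, a masked region-based CNN of the kind trained in this study can be exercised with torchvision's generic Mask R-CNN; the sketch below uses COCO-pretrained weights rather than the study's polyp-trained model, and "frame.jpg" is a hypothetical image path.

```python
# Illustrative sketch: inference with torchvision's generic Mask R-CNN (a masked
# region-based CNN), using COCO-pretrained weights rather than the polyp-trained
# model from this study. "frame.jpg" is a hypothetical image path.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.jpg").convert("RGB"))
with torch.no_grad():
    output = model([image])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

keep = output["scores"] > 0.5    # keep confident detections ("bounding boxes")
print(output["boxes"][keep])
```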
Affiliation(s)
- Srijan Mazumdar
- Indian Institute of Liver and Digestive Sciences, Sitala (East), Jagadishpur, Sonarpur, 24 Parganas (South), Kolkata, 700 150, India
- Saugata Sinha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Saurabh Jha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Balaji Jagtap
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
7
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720] [DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the available number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
8
Big Data in Gastroenterology Research. Int J Mol Sci 2023; 24:2458. [PMID: 36768780] [PMCID: PMC9916510] [DOI: 10.3390/ijms24032458]
Abstract
Studying individual data types in isolation provides only limited and incomplete answers to complex biological questions and particularly falls short in revealing sufficient mechanistic and kinetic details. In contrast, multi-omics approaches to studying health and disease permit the generation and integration of multiple data types on a much larger scale, offering a comprehensive picture of biological and disease processes. Gastroenterology and hepatobiliary research are particularly well-suited to such analyses, given the unique position of the luminal gastrointestinal (GI) tract at the nexus between the gut (mucosa and luminal contents), brain, immune and endocrine systems, and GI microbiome. The generation of 'big data' from multi-omic, multi-site studies can enhance investigations into the connections between these organ systems and organisms and more broadly and accurately appraise the effects of dietary, pharmacological, and other therapeutic interventions. In this review, we describe a variety of useful omics approaches and how they can be integrated to provide a holistic depiction of the human and microbial genetic and proteomic changes underlying physiological and pathophysiological phenomena. We highlight the potential pitfalls and alternatives to help avoid the common errors in study design, execution, and analysis. We focus on the application, integration, and analysis of big data in gastroenterology and hepatobiliary research.
9
Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Comput Intell Neurosci 2022; 2022:4325412. [PMID: 36262620] [PMCID: PMC9576362] [DOI: 10.1155/2022/4325412]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage system that utilizes two sets of colonoscopy data. However, identifying polyps by visualization has not been addressed. The proposed system is a five-stage system called ColoRectalCADx, which uses three publicly accessible datasets as input data for cancer detection. The three main datasets are CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, system experiments were performed with seven prominent end-to-end convolutional neural networks (CNNs) and nine fusion CNN models to extract spatial features. The end-to-end CNN and fusion features were then passed to Discrete Wavelet Transform (DWT) feature extraction, which retrieves time- and spatial-frequency features, and to support vector machine (SVM) classification. Experimentally, results were obtained for five stages. For each of the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). For each of the three datasets, at stage 2, the DaRD-22 fusion CNN obtained the optimal test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)), and at stage 4, the ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, Kvasir-SEG, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model, with loss scores of 0.7842 for CVC Clinic DB, 0.6977 for Kvasir2, and 0.6910 for Hyper Kvasir.
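The DWT-plus-SVM stage mentioned in this abstract can be illustrated with a small sketch using PyWavelets and scikit-learn on random placeholder images; the CNN and fusion stages are omitted and all data here are synthetic.

```python
# Synthetic sketch of a DWT + SVM stage: 2-D Haar wavelet coefficients are used as
# features for a support vector machine. Random arrays stand in for colonoscopy
# frames; the CNN and fusion stages of ColoRectalCADx are omitted.
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))      # hypothetical grayscale frames
labels = rng.integers(0, 2, size=40)     # 1 = polyp/lesion present, 0 = absent

def dwt_features(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")    # single-level 2-D DWT
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

X = np.stack([dwt_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```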
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
10
Yoo BS, Houston KV, D'Souza SM, Elmahdi A, Davis I, Vilela A, Parekh PJ, Johnson DA. Advances and horizons for artificial intelligence of endoscopic screening and surveillance of gastric and esophageal disease. Artif Intell Med Imaging 2022; 3:70-86. [DOI: 10.35711/aimi.v3.i3.70]
Abstract
The development of artificial intelligence in endoscopic assessment of the gastrointestinal tract has shown progressive enhancement in diagnostic acuity. This review discusses the expanding applications for gastric and esophageal diseases. The gastric section covers the utility of AI in detecting and characterizing gastric polyps and further explores prevention, detection, and classification of gastric cancer. The esophageal discussion highlights applications for use in screening and surveillance in Barrett's esophagus and in high-risk conditions for esophageal squamous cell carcinoma. Additionally, these discussions highlight applications for use in assessing eosinophilic esophagitis and future potential in assessing esophageal microbiome changes.
Affiliation(s)
- Byung Soo Yoo
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Kevin V Houston
- Department of Internal Medicine, Virginia Commonwealth University, Richmond, VA 23298, United States
- Steve M D'Souza
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Alsiddig Elmahdi
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Isaac Davis
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Ana Vilela
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Parth J Parekh
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- David A Johnson
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
11
Yang CB, Kim SH, Lim YJ. Preparation of image databases for artificial intelligence algorithm development in gastrointestinal endoscopy. Clin Endosc 2022; 55:594-604. [PMID: 35636749] [PMCID: PMC9539300] [DOI: 10.5946/ce.2021.229]
Abstract
Over the past decade, technological advances in deep learning have led to the introduction of artificial intelligence (AI) in medical imaging. The most commonly used structure in image recognition is the convolutional neural network, which mimics the action of the human visual cortex. The applications of AI in gastrointestinal endoscopy are diverse. Computer-aided diagnosis has achieved remarkable outcomes with recent improvements in machine-learning techniques and advances in computer performance. Despite some hurdles, the implementation of AI-assisted clinical practice is expected to aid endoscopists in real-time decision-making. In this summary, we reviewed state-of-the-art AI in the field of gastrointestinal endoscopy and offered a practical guide for building a learning image dataset for algorithm development.
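A common starting point for the kind of learning image dataset discussed in this review is a class-per-folder layout loaded with torchvision; the sketch below assumes a hypothetical dataset/train/{polyp,normal} directory structure rather than any specific public database.

```python
# Minimal sketch of loading a curated endoscopic image dataset for training,
# assuming a hypothetical class-per-folder layout such as
# dataset/train/{polyp,normal}/*.jpg (not any specific public database).
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("dataset/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

for images, labels in train_loader:
    # images: [32, 3, 224, 224]; labels are indices derived from the folder names
    break
```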
Affiliation(s)
- Chang Bong Yang
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Sang Hoon Kim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
12
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural networks, a class of deep learning methods, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 on the esophagus, 30 on the stomach, 29 on CE, and 63 on the colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced, with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been recently reported. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of their retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we have highlighted the published literature along with providing the current status and insights into the future of AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
13
Mohammad F, Al-Razgan M. Deep Feature Fusion and Optimization-Based Approach for Stomach Disease Classification. Sensors (Basel) 2022; 22:2801. [PMID: 35408415] [PMCID: PMC9003289] [DOI: 10.3390/s22072801]
Abstract
Cancer is one of the deadliest diseases and a major cause of human mortality. Several types of cancer affect the human body and its organs. Among them, stomach cancer is a particularly dangerous disease that spreads rapidly and needs to be diagnosed at an early stage. Early diagnosis of stomach cancer is essential to reduce the mortality rate. The manual diagnosis process is time-consuming and requires many tests and the availability of an expert doctor. Therefore, automated techniques are required to diagnose stomach infections from endoscopic images. Many computerized techniques have been introduced in the literature, but due to a few challenges (e.g., high similarity between healthy and infected regions, irrelevant feature extraction), there is much room to improve the accuracy and reduce the computational time. In this paper, a deep-learning-based stomach disease classification method employing deep feature extraction, fusion, and optimization using WCE images is proposed. The proposed method comprises several phases: data augmentation performed to increase the dataset images, deep transfer learning adopted for deep feature extraction, feature fusion performed on the deep extracted features, the fused feature matrix optimized with a modified dragonfly optimization method, and final classification of the stomach disease. The feature extraction phase employed two pre-trained deep CNN models (Inception v3 and DenseNet-201), with activations taken from feature-derivation layers. The deep-derived features were then concatenated in parallel and optimized using the meta-heuristic dragonfly algorithm. The optimized feature matrix was classified by employing machine-learning algorithms and achieved an accuracy of 99.8% on the combined stomach disease dataset. A comparison with state-of-the-art techniques shows improved accuracy.
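The parallel concatenation of deep features described here can be sketched with torchvision backbones; the dragonfly-based feature selection and the final classifier are omitted, pretrained weights are assumed to be downloadable, and the input tensors are random placeholders.

```python
# Sketch of parallel deep-feature concatenation: embeddings from Inception v3 and
# DenseNet-201 are extracted and fused. Pretrained weights are assumed to be
# downloadable; inputs are random placeholders, and the dragonfly-based feature
# selection and final classifier are omitted.
import torch
import torchvision

inception = torchvision.models.inception_v3(weights="DEFAULT")
inception.fc = torch.nn.Identity()          # expose 2048-dim pooled features
inception.eval()

densenet = torchvision.models.densenet201(weights="DEFAULT")
densenet.classifier = torch.nn.Identity()   # expose 1920-dim pooled features
densenet.eval()

x = torch.rand(8, 3, 299, 299)              # hypothetical preprocessed WCE frames
with torch.no_grad():
    f1 = inception(x)                       # shape [8, 2048]
    f2 = densenet(x)                        # shape [8, 1920]
fused = torch.cat([f1, f2], dim=1)          # fused feature matrix, shape [8, 3968]
print(fused.shape)
```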
Affiliation(s)
- Farah Mohammad
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Muna Al-Razgan
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
14
Hasan MM, Islam N, Rahman MM. Gastrointestinal polyp detection through a fusion of contourlet transform and Neural features. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2019.12.013]
15
Intelligent Model for Brain Tumor Identification Using Deep Learning. Applied Computational Intelligence and Soft Computing 2022. [DOI: 10.1155/2022/8104054]
Abstract
Brain tumors can be a major cause of psychiatric complications such as depression and panic attacks. Quick and timely recognition of a brain tumor makes treatment more effective. The processing of medical images plays a crucial role in assisting humans in identifying different diseases. The classification of brain tumors is a significant task that depends on the expertise and knowledge of the physician. An intelligent system for detecting and classifying brain tumors is essential to help physicians. The novel feature of the study is the division of brain tumors into glioma, meningioma, and pituitary classes using a hierarchical deep learning method. Diagnosis and tumor classification are important for a quick and productive cure, and medical image processing using a convolutional neural network (CNN) is giving excellent outcomes in this capacity. The CNN uses image fragments to train on the data and classify them into tumor types. Hierarchical Deep Learning-Based Brain Tumor (HDL2BT) classification is proposed with the help of a CNN for the detection and classification of brain tumors. The proposed system categorizes the tumor into four types: glioma, meningioma, pituitary, and no-tumor. The suggested model achieves 92.13% precision and a miss rate of 7.87%, outperforming earlier methods for detecting and segmenting brain tumors. The proposed system will provide clinical assistance in the area of medicine.
16
Wang W, Yang X, Li X, Tang J. Convolutional-capsule network for gastrointestinal endoscopy image classification. Int J Intell Syst 2022. [DOI: 10.1002/int.22815]
Affiliation(s)
- Wei Wang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Xin Yang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xin Li
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, Hubei, China
- Jinhui Tang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
17
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27:8103-8122. [PMID: 35068857] [PMCID: PMC8704267] [DOI: 10.3748/wjg.v27.i47.8103]
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy is an area of rapid research and development. The systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. However, further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process from regulatory organizations and societies, and to support physicians and patients on their journey to accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration into the field of colonoscopy and offers suggestions for future research.
Affiliation(s)
- Mahsa Taghiakbari
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
- Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
- Daniel von Renteln
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
18
Chen BL, Wan JJ, Chen TY, Yu YT, Ji M. A self-attention based faster R-CNN for polyp detection from colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103019]
19
Guo X, Zhang L, Hao Y, Zhang L, Liu Z, Liu J. Multiple abnormality classification in wireless capsule endoscopy images based on EfficientNet using attention mechanism. Rev Sci Instrum 2021; 92:094102. [PMID: 34598534] [DOI: 10.1063/5.0054161]
Abstract
The wireless capsule endoscopy (WCE) procedure produces tens of thousands of images of the digestive tract, making the manual reading process highly challenging. Convolutional neural networks are used to automatically detect lesions in WCE images. However, studies on clinical multilesion detection are scarce, and it is difficult to effectively balance the sensitivity to multiple lesions. A strategy for detecting multiple lesions is proposed, wherein common vascular and inflammatory lesions can be automatically and quickly detected on capsule endoscopic images. Based on weakly supervised learning, EfficientNet is fine-tuned to extract the endoscopic image features. Combining spatial features and channel features, the proposed attention network is then used as a classifier to obtain three-class predictions. The accuracy and speed of the model were compared with those of the ResNet121 and InceptionNetV4 models. It was tested on a public WCE image dataset obtained from 4143 subjects. On the computer-assisted diagnosis for capsule endoscopy database, the method gives a sensitivity of 96.67% for vascular lesions and 93.33% for inflammatory lesions. The precision for vascular lesions was 92.80%, and that for inflammatory lesions was 95.73%. The accuracy was 96.11%, which is 1.11% higher than that of the latest InceptionNetV4 network. Prediction for a single image requires only 14 ms, offering a comparatively good balance between accuracy and speed. This strategy can be used as an auxiliary diagnostic method for specialists for the rapid reading of clinical capsule endoscopy.
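Per-lesion sensitivity (recall) and precision of the kind reported in this abstract can be tabulated with scikit-learn; the sketch below uses random placeholder labels and predictions for a three-class problem, not outputs from the study's model.

```python
# Sketch of tabulating per-lesion sensitivity (recall) and precision for a
# three-class problem; labels and predictions are random placeholders, not model
# outputs from the study.
import numpy as np
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
classes = ["normal", "vascular lesion", "inflammatory lesion"]
y_true = rng.integers(0, 3, size=300)
y_pred = rng.integers(0, 3, size=300)

# "recall" in the report corresponds to the per-lesion sensitivity quoted above
print(classification_report(y_true, y_pred, target_names=classes))
```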
Affiliation(s)
- Xudong Guo
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Lulu Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Youguo Hao
- Department of Rehabilitation, Shanghai Putuo People's Hospital, Shanghai 200060, China
- Linqi Zhang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhang Liu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Jiannan Liu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
20
Zhou J, Hu N, Huang ZY, Song B, Wu CC, Zeng FX, Wu M. Application of artificial intelligence in gastrointestinal disease: a narrative review. Ann Transl Med 2021; 9:1188. [PMID: 34430629] [PMCID: PMC8350704] [DOI: 10.21037/atm-21-3001]
Abstract
Objective We collected evidence on the application of artificial intelligence (AI) in the gastroenterology field. The review is organized along two aspects, endoscopy types and gastrointestinal diseases, and briefly summarizes the challenges and future directions in this field. Background Due to the advancement of computational power and a surge of available data, a solid foundation has been laid for the growth of AI. Specifically, varied machine learning (ML) techniques have been emerging in endoscopic image analysis. To improve the accuracy and efficiency of clinicians, AI has been widely applied to gastrointestinal endoscopy. Methods The PubMed electronic database was searched using keywords including “AI”, “ML”, “deep learning (DL)”, “convolution neural network”, and “endoscopy” (such as white light endoscopy (WLE), narrow band imaging (NBI) endoscopy, magnifying endoscopy with narrow band imaging (ME-NBI), chromoendoscopy, endocytoscopy (EC), and capsule endoscopy (CE)). Search results were assessed for relevance and then used for detailed discussion. Conclusions This review describes the basic knowledge of AI, ML, and DL, and summarizes the application of AI in various endoscopes and gastrointestinal diseases. Finally, the challenges and directions of AI in clinical application are discussed. At present, the application of AI has solved some clinical problems, but more still needs to be done.
Affiliation(s)
- Jun Zhou
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
- Na Hu
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Zhi-Yin Huang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Bin Song
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Chun-Cheng Wu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Fan-Xin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
- Min Wu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
21
Durak S, Bayram B, Bakırman T, Erkut M, Doğan M, Gürtürk M, Akpınar B. Deep neural network approaches for detecting gastric polyps in endoscopic images. Med Biol Eng Comput 2021; 59:1563-1574. [PMID: 34259974] [DOI: 10.1007/s11517-021-02398-8]
Abstract
Gastrointestinal endoscopy is the primary method used for the diagnosis and treatment of gastric polyps. The early detection and removal of polyps is vitally important in preventing cancer development. Many studies indicate that a high workload can contribute to misdiagnosing gastric polyps, even for experienced physicians. In this study, we aimed to establish a deep learning-based computer-aided diagnosis system for automatic gastric polyp detection. A private gastric polyp dataset was generated for this purpose, consisting of 2195 endoscopic images and 3031 polyp labels. Retrospective gastrointestinal endoscopy data from the Karadeniz Technical University, Farabi Hospital, were used in the study. YOLOv4, CenterNet, EfficientNet, Cross Stage ResNext50-SPP, YOLOv3, YOLOv3-SPP, Single Shot Detection, and Faster Regional CNN deep learning models were implemented and assessed to determine the most efficient model for precancerous gastric polyp detection. The dataset was split 70%/30% for training and testing of all the implemented models. YOLOv4 was determined to be the most accurate model, with an 87.95% mean average precision. We also evaluated all the deep learning models using a public gastric polyp dataset as the test data. The results show that YOLOv4 has significant potential applicability in detecting gastric polyps and can be used effectively in gastrointestinal CAD systems.
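Detection metrics such as the mean average precision quoted above rest on box overlap; the sketch below computes intersection-over-union (IoU) between a predicted and an annotated polyp box, with purely illustrative coordinates.

```python
# Sketch of the box-overlap measure underlying mean average precision:
# intersection-over-union (IoU) between a predicted and an annotated polyp box,
# both in [x1, y1, x2, y2] pixel coordinates. The coordinates are illustrative.
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

predicted = [120, 80, 220, 190]       # hypothetical detected polyp box
ground_truth = [130, 90, 230, 200]    # hypothetical annotated polyp box
print("IoU:", round(iou(predicted, ground_truth), 3))  # >= 0.5 usually counts as a true positive
```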
Affiliation(s)
- Serdar Durak
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Bülent Bayram
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Tolga Bakırman
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Murat Erkut
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Metehan Doğan
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Mert Gürtürk
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Burak Akpınar
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
22
Parsa N, Byrne MF. Artificial intelligence for identification and characterization of colonic polyps. Ther Adv Gastrointest Endosc 2021; 14:26317745211014698. [PMID: 34263163] [PMCID: PMC8252334] [DOI: 10.1177/26317745211014698]
Abstract
Colonoscopy remains the gold standard exam for colorectal cancer screening due to its ability to detect and resect pre-cancerous lesions in the colon. However, its performance is greatly operator dependent. Studies have shown that up to one-quarter of colorectal polyps can be missed on a single colonoscopy, leading to high rates of interval colorectal cancer. In addition, the American Society for Gastrointestinal Endoscopy has proposed the “resect-and-discard” and “diagnose-and-leave” strategies for diminutive colorectal polyps to reduce the costs of unnecessary polyp resection and pathology evaluation. However, the performance of optical biopsy has been suboptimal in community practice. With recent improvements in machine-learning techniques, artificial intelligence–assisted computer-aided detection and diagnosis have been increasingly utilized by endoscopists. The application of computer-aided detection to real-time colonoscopy has been shown to increase the adenoma detection rate while decreasing the withdrawal time, and to improve endoscopists’ optical biopsy accuracy while reducing the time needed to make the diagnosis. These are promising steps toward standardization and improvement of colonoscopy quality, and implementation of “resect-and-discard” and “diagnose-and-leave” strategies. Yet, issues such as real-world applications and regulatory approval need to be addressed before artificial intelligence models can be successfully implemented in clinical practice. In this review, we summarize the recent literature on the application of artificial intelligence for detection and characterization of colorectal polyps and review the limitations of existing artificial intelligence technologies and future directions for this field.
Affiliation(s)
- Nasim Parsa
- Division of Gastroenterology and Hepatology, Department of Medicine, University of Missouri, Columbia, MO 65211, USA
- Michael F Byrne
- Division of Gastroenterology, Department of Medicine, The University of British Columbia, Vancouver, BC, Canada
- Satisfai Health, Vancouver, BC, Canada
23
Shah N, Jyala A, Patel H, Makker J. Utility of artificial intelligence in colonoscopy. Artif Intell Gastrointest Endosc 2021; 2:79-88. [DOI: 10.37126/aige.v2.i3.79]
Abstract
Colorectal cancer is one of the major causes of death worldwide. Colonoscopy is the most important tool that can identify neoplastic lesions in early stages and resect them in a timely manner, which helps in reducing mortality related to colorectal cancer. However, the quality of colonoscopy findings depends on the expertise of the endoscopist, and thus the rate of missed adenomas or polyps cannot be controlled. It is desirable to standardize the quality of colonoscopy by reducing the number of missed adenomas/polyps. The introduction of artificial intelligence (AI) into the field of medicine has become popular among physicians. According to recent studies, the application of AI in colonoscopy can help reduce the miss rate and increase the colorectal cancer detection rate. Moreover, AI assistance during colonoscopy has also been utilized in patients with inflammatory bowel disease to improve diagnostic accuracy, assess disease severity, and predict clinical outcomes. We conducted a literature review on the available evidence on the use of AI in colonoscopy. In this review article, we discuss the principles, applications, limitations, and future aspects of AI in colonoscopy.
Affiliation(s)
- Niel Shah
- Department of Internal Medicine, BronxCare Hospital Center, Bronx, NY 10457, United States
- Abhilasha Jyala
- Department of Internal Medicine, BronxCare Hospital Center, Bronx, NY 10457, United States
- Harish Patel
- Department of Internal Medicine, Gastroenterology, BronxCare Hospital Center, Bronx, NY 10457, United States
- Jasbir Makker
- Department of Internal Medicine, Gastroenterology, BronxCare Hospital Center, Bronx, NY 10457, United States
24
Shah N, Jyala A, Patel H, Makker J. Utility of artificial intelligence in colonoscopy. Artif Intell Gastrointest Endosc 2021. [DOI: 10.37126/aige.v2.i3.78]
25
Lazăr DC, Avram MF, Faur AC, Romoşan I, Goldiş A. The role of computer-assisted systems for upper-endoscopy quality monitoring and assessment of gastric lesions. Gastroenterol Rep (Oxf) 2021; 9:185-204. [PMID: 34316369] [PMCID: PMC8309682] [DOI: 10.1093/gastro/goab008]
Abstract
This article analyses the literature regarding the value of computer-assisted systems in esogastroduodenoscopy-quality monitoring and the assessment of gastric lesions. Current data show promising results in upper-endoscopy quality control and a satisfactory detection accuracy of gastric premalignant and malignant lesions, similar to or even exceeding that of experienced endoscopists. Moreover, artificial systems support decisions on the best treatment strategy in gastric-cancer patient care, namely endoscopic vs surgical resection according to tumor depth. In so doing, unnecessary surgical interventions would be avoided whilst providing a better quality of life and prognosis for these patients. All these performance data have been revealed by numerous studies using different artificial intelligence (AI) algorithms in addition to white-light endoscopy or novel endoscopic techniques that are available in expert endoscopy centers. It is expected that ongoing clinical trials involving AI and the embedding of computer-assisted diagnosis systems into endoscopic devices will enable real-life implementation of AI endoscopic systems in the near future and will at the same time help to overcome the current limits of computer-assisted systems, leading to an improvement in performance. These benefits should lead to better diagnostic and treatment strategies for gastric-cancer patients. Furthermore, the incorporation of AI algorithms in endoscopic tools, along with the development of large electronic databases containing endoscopic images, might help in upper-endoscopy assistance and could be used for telemedicine purposes and second opinions for difficult cases.
Collapse
Affiliation(s)
- Daniela Cornelia Lazăr
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara, Romania
| | - Mihaela Flavia Avram
- Department of Surgery X, 1st Surgery Discipline, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara, Romania
| | - Alexandra Corina Faur
- Department I, Discipline of Anatomy and Embryology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara, Romania
| | - Ioan Romoşan
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara, Romania
| | - Adrian Goldiş
- Department VII of Internal Medicine II, Discipline of Gastroenterology and Hepatology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara, Romania
| |
Collapse
|
26
|
Shao Y, Zhang YX, Chen HH, Lu SS, Zhang SC, Zhang JX. Advances in the application of artificial intelligence in solid tumor imaging. Artif Intell Cancer 2021; 2:12-24. [DOI: 10.35713/aic.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 04/02/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Early diagnosis and timely treatment are crucial in reducing cancer-related mortality. Artificial intelligence (AI) has greatly relieved clinical workloads and changed current medical workflows. We searched for recent studies, reports, and reviews referring to AI and solid tumors; many reviews have summarized AI applications in the diagnosis and treatment of a single tumor type. We herein systematically review advances in AI applications across multiple solid tumors, including those of the esophagus, stomach, intestine, breast, thyroid, prostate, lung, liver, cervix, pancreas, and kidney, with a specific focus on the continual improvement of model performance in imaging practice.
Collapse
Affiliation(s)
- Ying Shao
- Department of Laboratory Medicine, People Hospital of Jiangying, Jiangying 214400, Jiangsu Province, China
| | - Yu-Xuan Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Huan-Huan Chen
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shan-Shan Lu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shi-Chang Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Jie-Xin Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| |
Collapse
|
27
|
Cao C, Wang R, Yu Y, Zhang H, Yu Y, Sun C. Gastric polyp detection in gastroscopic images using deep neural network. PLoS One 2021; 16:e0250632. [PMID: 33909671 PMCID: PMC8081222 DOI: 10.1371/journal.pone.0250632] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 04/08/2021] [Indexed: 12/26/2022] Open
Abstract
This paper presents results on detecting gastric polyps in gastroscopic images with a deep learning object detection method. Gastric polyps vary in size, and small polyps are particularly difficult to distinguish from the background. We propose a feature extraction and fusion module and combine it with the YOLOv3 network to form our detector. This method performs better than other methods in the detection of small polyps because it fuses the semantic information of high-level feature maps with low-level feature maps. In this work, we use a self-built dataset of gastric polyps containing 1433 training images and 508 validation images, on which we train and validate our network. Compared with other polyp detection methods, our method shows a significant improvement in precision, recall, F1 score, and F2 score, reaching 91.6%, 86.2%, 88.8%, and 87.2%, respectively.
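As a brief editorial illustration of how the reported F1 and F2 values follow from the stated precision and recall, the sketch below applies the standard F-beta formula; it is not code from the cited work, and the numbers are simply those quoted in the abstract.

```python
# Minimal sketch: reproducing F1/F2 from the reported precision and recall,
# assuming the standard F-beta definition (weighted harmonic mean).

def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta score: beta > 1 weights recall more heavily than precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

precision, recall = 0.916, 0.862                      # values quoted in the abstract
print(f"F1 = {f_beta(precision, recall, 1.0):.3f}")   # ~0.888
print(f"F2 = {f_beta(precision, recall, 2.0):.3f}")   # ~0.872
```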
Collapse
Affiliation(s)
- Chanting Cao
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Ruilin Wang
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Yao Yu
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- * E-mail:
| | - Hui Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Ying Yu
- Beijing An Zhen Hospital, Beijing, China
| | - Changyin Sun
- School of Automation, Southeast University, Nanjing, China
| |
Collapse
|
28
|
Mitsala A, Tsalikidis C, Pitiakoudis M, Simopoulos C, Tsaroucha AK. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Curr Oncol 2021; 28:1581-1607. [PMID: 33922402 PMCID: PMC8161764 DOI: 10.3390/curroncol28030149] [Citation(s) in RCA: 79] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/09/2021] [Accepted: 04/20/2021] [Indexed: 12/24/2022]
Abstract
The development of artificial intelligence (AI) algorithms has permeated the medical field with great success. The widespread use of AI technology in diagnosing and treating several types of cancer, especially colorectal cancer (CRC), is now attracting substantial attention. CRC, which represents the third most commonly diagnosed malignancy in both men and women, is considered a leading cause of cancer-related deaths globally. Our review herein aims to provide in-depth knowledge and analysis of the AI applications in CRC screening, diagnosis, and treatment based on current literature. We also explore the role of recent advances in AI systems regarding medical diagnosis and therapy, with several promising results. CRC is a highly preventable disease, and AI-assisted techniques in routine screening represent a pivotal step in declining incidence rates of this malignancy. So far, computer-aided detection and characterization systems have been developed to increase the detection rate of adenomas. Furthermore, CRC treatment enters a new era with robotic surgery and novel computer-assisted drug delivery techniques. At the same time, healthcare is rapidly moving toward precision or personalized medicine. Machine learning models have the potential to contribute to individual-based cancer care and transform the future of medicine.
Collapse
Affiliation(s)
- Athanasia Mitsala
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
- Correspondence: ; Tel.: +30-6986423707
| | - Christos Tsalikidis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Michail Pitiakoudis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Constantinos Simopoulos
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Alexandra K. Tsaroucha
- Laboratory of Experimental Surgery & Surgical Research, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece;
| |
Collapse
|
29
|
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most related work based on DL approaches extracted spatial features only. In the following stage of Gastro-CADx, however, the features extracted in the first stage are passed to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features; a feature reduction procedure is also performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused by concatenation to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are used to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved accuracies of 97.3% and 99.7% for Dataset I and Dataset II, respectively. The results were compared with recent related works, and the comparison showed that the proposed approach can classify GI diseases with higher accuracy than other work. Thus, it may help reduce medical complications, death rates, and treatment costs, and can help gastroenterologists produce more accurate diagnoses while lowering inspection time.
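The second-stage idea described above (deep spatial features re-expressed with DCT and DWT and then fused by concatenation) can be sketched roughly as follows; the array shapes, the Haar wavelet, the number of retained DCT coefficients, and the random stand-in "CNN features" are assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch of frequency-domain re-expression and fusion of deep features.
import numpy as np
import pywt
from scipy.fft import dct

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(8, 512))        # 8 images x 512-dim deep features (placeholder)

# DCT: keep the first k coefficients as a compact frequency representation
dct_feats = dct(cnn_features, axis=1, norm="ortho")[:, :128]

# DWT: single-level decomposition into approximation and detail coefficients
cA, cD = pywt.dwt(cnn_features, "haar", axis=1)
dwt_feats = np.hstack([cA, cD])                 # 512 coefficients in total

# Fusion by simple concatenation (one of several combinations one could test)
fused = np.hstack([dct_feats, dwt_feats])
print(fused.shape)                              # (8, 640)
```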
Collapse
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
| | - Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
| |
Collapse
|
30
|
Wavelet Transform and Deep Convolutional Neural Network-Based Smart Healthcare System for Gastrointestinal Disease Detection. Interdiscip Sci 2021; 13:212-228. [PMID: 33566337 DOI: 10.1007/s12539-021-00417-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 01/16/2021] [Accepted: 01/23/2021] [Indexed: 12/19/2022]
Abstract
This work presents a smart healthcare system for the detection of various abnormalities in the gastrointestinal (GI) region with the help of time-frequency analysis and a convolutional neural network. The KVASIR V2 dataset, comprising eight classes of GI-tract images (Normal cecum, Normal pylorus, Normal Z-line, Esophagitis, Polyps, Ulcerative Colitis, Dyed and lifted polyp, and Dyed resection margins), is used for training and validation. The initial phase of the work involves an image pre-processing step, followed by extraction of the approximate discrete wavelet transform coefficients. The decomposed images of each class are then given as input to two convolutional neural network (CNN) models for training and testing at two different classification levels. Classification performance is measured with the following indices: accuracy, precision, recall, specificity, and F1 score. The experimental results show accuracies of 97.25% and 93.75% at the first and second levels of classification, respectively. Lastly, a comparative performance analysis is carried out against several previously published works on a similar dataset, where the proposed approach performs better than contemporary methods.
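A minimal sketch of the preprocessing idea outlined above: the approximate sub-band of a 2-D discrete wavelet transform is fed to a CNN. The toy grayscale frame, the Haar wavelet, and the tiny network are placeholders rather than the paper's architecture.

```python
# Hedged sketch: 2-D DWT approximation coefficients as CNN input.
import numpy as np
import pywt
import torch
import torch.nn as nn

image = np.random.rand(224, 224).astype(np.float32)     # stand-in grayscale endoscopy frame
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")              # cA: 112 x 112 approximation sub-band

tiny_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 8),                                    # 8 KVASIR classes
)
x = torch.from_numpy(cA.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (1, 1, 112, 112)
print(tiny_cnn(x).shape)                                 # torch.Size([1, 8])
```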
Collapse
|
31
|
New polyp image classification technique using transfer learning of network-in-network structure in endoscopic images. Sci Rep 2021; 11:3605. [PMID: 33574394 PMCID: PMC7878472 DOI: 10.1038/s41598-021-83199-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Accepted: 01/18/2021] [Indexed: 12/13/2022] Open
Abstract
Colorectal cancer occurs in the gastrointestinal tract and is the third most common of 27 major cancer types in South Korea and worldwide. Colorectal polyps are known to increase the potential of developing colorectal cancer, and detected polyps need to be resected to reduce the risk of developing cancer. This research improved the performance of polyp classification through fine-tuning of a Network-in-Network (NIN) architecture initialized with a model pre-trained on the ImageNet database. Random shuffling was performed 20 times on 1000 colonoscopy images; each split consisted of 800 training images and 200 test images, and accuracy was evaluated on the 200 test images in each of the 20 experiments. Three comparison methods were constructed from AlexNet by transferring weights trained on three different state-of-the-art databases, and a plain AlexNet-based method without transfer learning was also compared. The accuracy of the proposed method was significantly higher than that of the four other state-of-the-art methods and showed an 18.9% improvement over the plain AlexNet-based method. The area under the curve was approximately 0.930 ± 0.020, and the recall rate was 0.929 ± 0.029. Given its high recall and accuracy, such an automatic algorithm can assist endoscopists in identifying adenomatous polyps, enabling the timely resection of polyps at an early stage.
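A hedged sketch of the transfer-learning recipe described above, with ResNet-18 standing in for the Network-in-Network backbone (which is not bundled with torchvision); freezing the pretrained layers and replacing the classifier head approximates the pre-training/fine-tuning split, but this is not the authors' exact model.

```python
# Illustrative sketch: fine-tune an ImageNet-pretrained backbone for two polyp classes.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")     # pretrained ImageNet weights
for p in model.parameters():                          # freeze the pretrained layers...
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)         # ...and replace the classifier head

# Only model.fc.parameters() would then be passed to the optimiser; unfreezing deeper
# blocks afterwards corresponds to the "fine-tuning" step described in the abstract.
```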
Collapse
|
32
|
Abstract
PURPOSE OF REVIEW Machine learning (ML) algorithms have augmented human judgment in various fields of clinical medicine. However, little progress has been made in applying these tools to video-endoscopy. We reviewed the field of video-analysis (herein termed 'Videomics' for the first time) as applied to diagnostic endoscopy, assessing its preliminary findings, potential, as well as limitations, and consider future developments. RECENT FINDINGS ML has been applied to diagnostic endoscopy with different aims: blind-spot detection, automatic quality control, lesion detection, classification, and characterization. The early experience in gastrointestinal endoscopy has recently been expanded to the upper aerodigestive tract, demonstrating promising results in both clinical fields. From top to bottom, multispectral imaging (such as Narrow Band Imaging) appeared to provide significant information drawn from endoscopic images. SUMMARY Videomics is an emerging discipline that has the potential to significantly improve human detection and characterization of clinically significant lesions during endoscopy across medical and surgical disciplines. Research teams should focus on the standardization of data collection, identification of common targets, and optimal reporting. With such a collaborative stepwise approach, Videomics is likely to soon augment clinical endoscopy, significantly impacting cancer patient outcomes.
Collapse
|
33
|
Comparison of deep learning and conventional machine learning methods for classification of colon polyp types. EUROBIOTECH JOURNAL 2021. [DOI: 10.2478/ebtj-2021-0006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Determination of polyp types requires tissue biopsy during colonoscopy and then histopathological examination of the microscopic images, which is tremendously time-consuming and costly. The first aim of this study was to design a computer-aided diagnosis system to classify polyp types using colonoscopy images (optical biopsy) without the need for tissue biopsy. For this purpose, two different approaches were designed, based on conventional machine learning (ML) and deep learning. First, classification was performed using a random forest approach with features obtained from the histogram of gradients descriptor. Second, a simple convolutional neural network (CNN)-based architecture was built and trained with the colonoscopy images containing colon polyps. The performance of these approaches on two-category (adenoma & serrated vs. hyperplastic) and three-category (adenoma vs. hyperplastic vs. serrated) classifications was investigated. Furthermore, the effect of imaging modality on classification was also examined using white-light and narrow-band imaging systems. The performance of these approaches was compared with the results obtained by 3 novice and 4 expert doctors. Two-category classification results showed that the conventional ML approach performed significantly better than the simple CNN-based approach in both narrow-band and white-light imaging modalities. The accuracy reached almost 95% for white-light imaging. This performance surpassed the correct classification rate of all 7 doctors. Additionally, the results of the second task (three-category) indicated that the simple CNN architecture outperformed both the conventional ML-based approach and the doctors. This study shows the feasibility of using conventional machine learning or deep learning based approaches for automatic classification of colon polyp types on colonoscopy images.
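The conventional-ML branch described above can be illustrated roughly as below, assuming the "histogram of gradients" descriptor corresponds to the standard HOG implementation in scikit-image; the images, labels, and descriptor parameters are synthetic placeholders, not the study's settings.

```python
# Hedged sketch: HOG descriptor + random forest for two-category polyp classification.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))                      # stand-in polyp image crops
labels = rng.integers(0, 2, size=40)                     # adenoma/serrated vs. hyperplastic

features = np.array([
    hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    for img in images
])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
print(clf.score(features, labels))                       # training accuracy only (toy data)
```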
Collapse
|
34
|
Computerized classification of gastrointestinal polyps using stacking ensemble of convolutional neural network. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100603] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
|
35
|
Golhar M, Bobrow TL, Khoshknab MP, Jit S, Ngamruengphong S, Durr NJ. Improving Colonoscopy Lesion Classification Using Semi-Supervised Deep Learning. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:631-640. [PMID: 33747680 PMCID: PMC7978231 DOI: 10.1109/access.2020.3047544] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
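One way to picture the unsupervised jigsaw task mentioned above is the sketch below: an unlabeled frame is cut into a 3x3 grid, the tiles are shuffled under a known permutation, and a network would be trained to predict which permutation was applied. The grid size and permutation handling are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch of a jigsaw pretext task on an unlabeled frame.
import numpy as np

def make_jigsaw(image: np.ndarray, rng: np.random.Generator, grid: int = 3):
    h, w = image.shape
    th, tw = h // grid, w // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw] for r in range(grid) for c in range(grid)]
    perm = rng.permutation(grid * grid)              # pseudo-label for the pretext task
    shuffled = [tiles[i] for i in perm]
    rows = [np.hstack(shuffled[r*grid:(r+1)*grid]) for r in range(grid)]
    return np.vstack(rows), perm

rng = np.random.default_rng(0)
frame = rng.random((225, 225))                       # stand-in unlabeled colonoscopy frame
puzzle, perm = make_jigsaw(frame, rng)
print(puzzle.shape, perm)                            # shuffled image and its permutation label
```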
Collapse
Affiliation(s)
- Mayank Golhar
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | | | - Simran Jit
- Division of Gastroenterology and Hepatology, Johns Hopkins Hospital, Baltimore, MD 21287, USA
| | - Saowanee Ngamruengphong
- Division of Gastroenterology and Hepatology, Johns Hopkins Hospital, Baltimore, MD 21287, USA
| | - Nicholas J Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| |
Collapse
|
36
|
Le A, Salifu MO, McFarlane IM. Artificial Intelligence in Colorectal Polyp Detection and Characterization. INTERNATIONAL JOURNAL OF CLINICAL RESEARCH & TRIALS 2021; 6:157. [PMID: 33884326 PMCID: PMC8057724 DOI: 10.15344/2456-8007/2021/157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
BACKGROUND Over the past 20 years, the advancement of artificial intelligence (AI) and deep learning (DL) has allowed for fast sorting and analysis of large sets of data. In the field of gastroenterology, colorectal screening procedures produce an abundance of data through video and imaging. With AI and DL, this information can be used to create systems that make automatic polyp detection and characterization possible. Convolutional neural networks (CNNs) have proven to be an effective way to increase polyp detection and ultimately adenoma detection rates. Different methods of characterizing polyps as hyperplastic vs. adenomatous or non-neoplastic vs. neoplastic have also been investigated, with promising results. FINDINGS The rate of missed polyps on colonoscopy can be as high as 25%. At the beginning of the 2000s, hand-crafted machine learning (ML) algorithms were created and trained retrospectively on colonoscopy images and videos, achieving high sensitivity, specificity, and accuracy of over 90% in many studies. Over time, advances in DL and CNNs have allowed algorithms to be trained on non-medical images and applied retrospectively to colonoscopy videos and images with similar results. Within the past few years, these algorithms have been applied in real-time colonoscopy with mixed results, one study showing no difference while others showed increased polyp detection. Various methods of polyp characterization have also been investigated. Through AI, DL, and CNNs, polyps can be identified as hyperplastic/adenomatous or non-neoplastic/neoplastic with high sensitivity, specificity, and accuracy. One of the research areas in polyp characterization is how to capture the polyp image. This paper looks at different modalities for characterizing polyps, such as magnifying narrow-band imaging (NBI), endocytoscopy, laser-induced fluorescence spectroscopy, autofluorescence endoscopy, and white-light endoscopy. CONCLUSIONS Overall, much progress has been made in automatic detection and characterization of polyps in real time. Barring ethical or mass-adoption setbacks, it is inevitable that AI will be involved in the field of GI, especially in colorectal polyp detection and identification.
Collapse
Affiliation(s)
| | | | - Isabel M. McFarlane
- Corresponding Author: Dr. Isabel M. McFarlane, Clinical Assistant Professor of Medicine, Director, Third Year Internal Medicine Clerkship, Department of Internal Medicine, Brooklyn, NY 11203, USA Tel: 718-270-2390, Fax: 718-270-1324;
| |
Collapse
|
37
|
Use of artificial intelligence for detection of gastric lesions by magnetically controlled capsule endoscopy. Gastrointest Endosc 2021; 93:133-139.e4. [PMID: 32470426 DOI: 10.1016/j.gie.2020.05.027] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 05/01/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Magnetically controlled capsule endoscopy (MCE) has become an efficient diagnostic modality for gastric diseases. We developed a novel automatic gastric lesion detection system to assist in diagnosis and reduce inter-physician variations. This study aimed to evaluate the diagnostic capability of the computer-aided detection system for MCE images. METHODS We developed a novel automatic gastric lesion detection system based on a convolutional neural network (CNN) and faster region-based convolutional neural network (RCNN). A total of 1,023,955 MCE images from 797 patients were used to train and test the system. These images were divided into 7 categories (erosion, polyp, ulcer, submucosal tumor, xanthoma, normal mucosa, and invalid images). The primary endpoint was the sensitivity of the system. RESULTS The system detected gastric focal lesions with 96.2% sensitivity (95% confidence interval [CI], 95.7%-96.5%), 76.2% specificity (95% CI, 75.97%-76.3%), 16.0% positive predictive value (95% CI, 15.7%-16.3%), 99.7% negative predictive value (95% CI, 99.74%-99.79%), and 77.1% accuracy (95% CI, 76.9%-77.3%) (sensitivity was 99.3% for erosions; 96.5% for polyps; 89.3% for ulcers; 87.2% for submucosal tumors; 90.6% for xanthomas; 67.8% for normal; and 96.1% for invalid images). Analysis of the receiver operating characteristic curve showed that the area under the curve for all positive images was 0.84. Image processing time was 44 milliseconds per image for the system and 0.38 ± 0.29 seconds per image for clinicians (P < .001). The kappa value for two repeated readings was 1. CONCLUSIONS The CNN and faster R-CNN-based diagnostic system showed good performance in diagnosing gastric focal lesions in MCE images.
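A hedged sketch of faster R-CNN-style lesion detection on a single frame, using torchvision's generic pretrained detector as a stand-in for the authors' MCE-trained model (whose weights and class list are not reproduced here); the frame and the confidence threshold are placeholders.

```python
# Illustrative sketch: object-detection inference with a faster R-CNN detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # generic pretrained stand-in
frame = torch.rand(3, 480, 480)                              # stand-in capsule-endoscopy frame

with torch.no_grad():
    pred = model([frame])[0]                                 # dict with 'boxes', 'labels', 'scores'
keep = pred["scores"] > 0.5                                  # simple confidence threshold
print(pred["boxes"][keep].shape, pred["labels"][keep])
```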
Collapse
|
38
|
Sinonquel P, Eelbode T, Bossuyt P, Maes F, Bisschops R. Artificial intelligence and its impact on quality improvement in upper and lower gastrointestinal endoscopy. Dig Endosc 2021; 33:242-253. [PMID: 33145847 DOI: 10.1111/den.13888] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 10/14/2020] [Accepted: 11/01/2020] [Indexed: 12/24/2022]
Abstract
Artificial intelligence (AI) and its application in medicine have attracted great interest. Within gastrointestinal (GI) endoscopy, colonoscopy and polyp detection are the most investigated areas, with upper GI endoscopy close behind. Since endoscopy is performed by humans, it is inherently an imperfect procedure. Computer-aided diagnosis may improve its quality by helping to prevent missed lesions and by supporting optical diagnosis of those detected. AI systems have evolved considerably over recent decades, optimizing diagnostic performance with lower variability and matching or even outperforming expert endoscopists. This shows great potential for future quality improvement of endoscopy, given the outstanding diagnostic features of AI. With this narrative review, we highlight the potential benefit of AI to improve overall quality in daily endoscopy and describe the most recent developments in characterization and diagnosis, as well as the current conditions for regulatory approval.
Collapse
Affiliation(s)
- Pieter Sinonquel
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven, Leuven, Belgium
| | - Tom Eelbode
- Medical Imaging Research Center (MIRC), University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
| | - Peter Bossuyt
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Gastroenterology and Hepatology, Imelda Hospital, Bonheiden, Belgium
| | - Frederik Maes
- Medical Imaging Research Center (MIRC), University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
| | - Raf Bisschops
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven, Leuven, Belgium
| |
Collapse
|
39
|
Saito H, Tanimoto T, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Hirasawa D, Matsuda T, Endo Y, Tada T. Automatic anatomical classification of colonoscopic images using deep convolutional neural networks. Gastroenterol Rep (Oxf) 2020; 9:226-233. [PMID: 34316372 PMCID: PMC8309686 DOI: 10.1093/gastro/goaa078] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 03/25/2020] [Accepted: 05/13/2020] [Indexed: 01/10/2023] Open
Abstract
Background A colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to distinguish colorectal images from parts of the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. Method We trained a CNN on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images categorized into seven anatomical locations: the terminal ileum, the cecum, ascending colon to transverse colon, descending colon to sigmoid colon, the rectum, the anus, and indistinguishable parts. We examined images taken during total colonoscopy performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses made by endoscopists and those made by the CNN. The main outcomes of the study were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images. Results The constructed CNN recognized anatomical locations of colonoscopy images with the following areas under the curve: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for ascending colon to transverse colon; 0.846 for descending colon to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. During the test process, the CNN system correctly recognized 66.6% of images. Conclusion We constructed a new CNN system with clinically relevant performance for recognizing anatomical locations in colonoscopy images, a first step toward a CAD system that supports colonoscopy and provides assurance of the quality of the procedure.
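The per-location AUC evaluation described above can be sketched as a one-vs-rest computation from predicted class probabilities; the seven-class labels and probabilities below are synthetic placeholders, not the study's data.

```python
# Hedged sketch: per-class (one-vs-rest) AUC for a multi-class location classifier.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["terminal ileum", "cecum", "asc-trans", "desc-sigmoid", "rectum", "anus", "indistinct"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(classes), size=500)            # stand-in ground-truth locations
probs = rng.dirichlet(np.ones(len(classes)), size=500)      # stand-in softmax outputs (rows sum to 1)

y_bin = label_binarize(y_true, classes=range(len(classes)))
for i, name in enumerate(classes):
    print(name, round(roc_auc_score(y_bin[:, i], probs[:, i]), 3))
```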
Collapse
Affiliation(s)
- Hiroaki Saito
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | | | - Tsuyoshi Ozawa
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
| | - Soichiro Ishihara
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Mitsuhiro Fujishiro
- Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Aichi, Japan
| | - Satoki Shichijo
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Dai Hirasawa
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | - Tomoki Matsuda
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | - Yuma Endo
- AI Medical Service, Inc., Tokyo, Japan
| | - Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; AI Medical Service, Inc., Tokyo, Japan
| |
Collapse
|
40
|
Wittenberg T, Raithel M. Artificial Intelligence-Based Polyp Detection in Colonoscopy: Where Have We Been, Where Do We Stand, and Where Are We Headed? Visc Med 2020; 36:428-438. [PMID: 33447598 PMCID: PMC7768101 DOI: 10.1159/000512438] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Accepted: 10/20/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND In the past, image-based computer-assisted diagnosis and detection systems have been driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (known as the "Big Data" phenomenon) together with developments in artificial intelligence (AI), particularly so-called deep convolutional neural networks, real-time computer-assisted detection of adenomas and polyps during screening colonoscopy has become feasible. SUMMARY Against this background, this contribution provides a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years, starting with the era of "handcrafted geometrical features" and simple classification schemes, moving through the development and use of "texture-based features" and machine learning approaches, and ending with current developments in deep learning using convolutional neural networks. In parallel, the need for large-scale clinical data to develop such methods is discussed, up to commercially available AI products for automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a brief outlook is given on further possibilities for AI methods within colonoscopy. KEY MESSAGES Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.
Collapse
|
41
|
Attardo S, Chandrasekar VT, Spadaccini M, Maselli R, Patel HK, Desai M, Capogreco A, Badalamenti M, Galtieri PA, Pellegatta G, Fugazza A, Carrara S, Anderloni A, Occhipinti P, Hassan C, Sharma P, Repici A. Artificial intelligence technologies for the detection of colorectal lesions: The future is now. World J Gastroenterol 2020; 26:5606-5616. [PMID: 33088155 PMCID: PMC7545398 DOI: 10.3748/wjg.v26.i37.5606] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Revised: 06/30/2020] [Accepted: 09/15/2020] [Indexed: 02/06/2023] Open
Abstract
Several studies have shown a significant adenoma miss rate of up to 35% during screening colonoscopy, especially in patients with diminutive adenomas. The use of artificial intelligence (AI) in colonoscopy has been gaining popularity by helping endoscopists with polyp detection, with the aim of increasing their adenoma detection rate (ADR) and polyp detection rate (PDR) in order to reduce the incidence of interval cancers. Deep convolutional neural network (DCNN)-based AI systems for polyp detection have been trained and tested in ex vivo settings such as colonoscopy still images or videos. Recent trials have evaluated the real-time efficacy of DCNN-based systems, showing promising results in terms of improved ADR and PDR. In this review, we report data from the preliminary ex vivo experiences and summarize the results of the initial randomized controlled trials.
Collapse
Affiliation(s)
- Simona Attardo
- Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
| | | | - Marco Spadaccini
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Roberta Maselli
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | - Harsh K Patel
- Department of Internal Medicine, Ochsner Clinic Foundation, New Orleans, LA 70124, United States
| | - Madhav Desai
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
| | - Antonio Capogreco
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| | - Matteo Badalamenti
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | | | - Gaia Pellegatta
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | - Alessandro Fugazza
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | - Silvia Carrara
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | - Andrea Anderloni
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
| | - Pietro Occhipinti
- Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
| | - Cesare Hassan
- Endoscopy Unit, Nuovo Regina Margherita Hospital, Roma 00153, Italy
| | - Prateek Sharma
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
| | - Alessandro Repici
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
| |
Collapse
|
42
|
Sierra F, Gutierrez Y, Martinez F. An online deep convolutional polyp lesion prediction over Narrow Band Imaging (NBI). ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:2412-2415. [PMID: 33018493 DOI: 10.1109/embc44109.2020.9176534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Polyps, abnormal protuberances along the intestinal tract, are the main biomarker for diagnosing gastrointestinal cancer. During routine colonoscopies such polyps are localized and coarsely characterized according to microvascular and surface textural patterns. Narrow-band imaging (NBI) sequences have emerged as a complementary technique to enhance the description of suspicious mucosal surfaces according to blood vessel architecture. Nevertheless, a high rate of misleading polyp characterizations, together with dependency on expert evaluation, reduces the possibility of effective disease treatment. Additionally, challenges during colonoscopy, such as abrupt camera motion, intensity changes, and artifacts, complicate the diagnostic task. This work introduces a robust frame-level convolutional strategy capable of characterizing and predicting hyperplastic, adenomatous, and serrated polyps in NBI sequences. The proposed strategy was evaluated on a total of 76 videos, achieving an average accuracy of 90.79% in distinguishing among these three classes. Remarkably, the approach achieves 100% accuracy in differentiating intermediate serrated polyps, whose evaluation is challenging even for expert gastroenterologists. The approach was also favorable for supporting polyp resection decisions, achieving a perfect score on the evaluated dataset. Clinical relevance: the proposed approach supports observable histological characterization of polyps during routine colonoscopy, avoiding misclassification of potential masses that could evolve into cancer.
Collapse
|
43
|
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 07/13/2020] [Accepted: 07/18/2020] [Indexed: 12/12/2022]
|
44
|
Sánchez-Peralta LF, Bote-Curiel L, Picón A, Sánchez-Margallo FM, Pagador JB. Deep learning to find colorectal polyps in colonoscopy: A systematic literature review. Artif Intell Med 2020; 108:101923. [PMID: 32972656 DOI: 10.1016/j.artmed.2020.101923] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 03/03/2020] [Accepted: 07/01/2020] [Indexed: 02/07/2023]
Abstract
Colorectal cancer has a high incidence worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for diagnosis and removal of colorectal lesions with potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor for colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization, and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; reviews seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most used metric for reporting is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated, and publicly available database, which also includes the most convenient metrics to report results. Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.
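Since Intersection over Union is highlighted above as the usual localization and segmentation metric, a minimal box-level IoU sketch follows; the coordinate convention (x1, y1, x2, y2) and the example boxes are arbitrary choices for illustration.

```python
# Minimal sketch: Intersection over Union for two axis-aligned boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)          # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((10, 10, 60, 60), (30, 30, 80, 80)))       # ~0.22 for these partially overlapping boxes
```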
Collapse
Affiliation(s)
| | - Luis Bote-Curiel
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain.
| | - Artzai Picón
- Tecnalia, Parque Científico y Tecnológico de Bizkaia, C/ Astondo bidea, Edificio 700, 48160 Derio, Spain.
| | | | - J Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain.
| |
Collapse
|
45
|
Mostafiz R, Rahman MM, Uddin MS. Gastrointestinal polyp classification through empirical mode decomposition and neural features. SN APPLIED SCIENCES 2020. [DOI: 10.1007/s42452-020-2944-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
|
46
|
Zhang Y, Li F, Yuan F, Zhang K, Huo L, Dong Z, Lang Y, Zhang Y, Wang M, Gao Z, Qin Z, Shen L. Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence. Dig Liver Dis 2020; 52:566-572. [PMID: 32061504 DOI: 10.1016/j.dld.2019.12.146] [Citation(s) in RCA: 66] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 12/28/2019] [Accepted: 12/31/2019] [Indexed: 12/11/2022]
Abstract
BACKGROUND The sensitivity of endoscopy in diagnosing chronic atrophic gastritis is only 42%, and multipoint biopsy, despite being more accurate, is not always available. AIMS This study aimed to construct a convolutional neural network to improve the diagnostic rate of chronic atrophic gastritis. METHODS We collected 5470 images of the gastric antrums of 1699 patients and labeled them with their pathological findings. Of these, 3042 images depicted atrophic gastritis and 2428 did not. We designed and trained a convolutional neural network-chronic atrophic gastritis model to diagnose atrophic gastritis accurately, verified by five-fold cross-validation. Moreover, the diagnoses of the deep learning model were compared with those of three experts. RESULTS The diagnostic accuracy, sensitivity, and specificity of the convolutional neural network-chronic atrophic gastritis model in diagnosing atrophic gastritis were 0.942, 0.945, and 0.940, respectively, which were higher than those of the experts. The detection rates of mild, moderate, and severe atrophic gastritis were 93%, 95%, and 99%, respectively. CONCLUSION Chronic atrophic gastritis could be diagnosed by gastroscopic images using the convolutional neural network-chronic atrophic gastritis model. This may greatly reduce the burden on endoscopy physicians, simplify diagnostic routines, and reduce costs for doctors and patients.
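A rough sketch of the five-fold cross-validation protocol mentioned above, using a simple scikit-learn classifier on synthetic features as a stand-in for the authors' image-level CNN; the feature vectors, labels, and classifier are placeholders.

```python
# Hedged sketch: stratified five-fold cross-validation of a binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))            # stand-in per-image feature vectors
y = rng.integers(0, 2, size=300)          # atrophic gastritis vs. not (placeholder labels)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores, scores.mean())              # per-fold accuracy and its mean
```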
Collapse
Affiliation(s)
- Yaqiong Zhang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Fengxia Li
- Department of Gastroenterology, Shanxi Provincial People's Hospital, Taiyuan, China.
| | - Fuqiang Yuan
- Baidu Online Network Technology (Beijing) Corporation, Beijing, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lijuan Huo
- Department of Gastroenterology, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Zichen Dong
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yiming Lang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yapeng Zhang
- Fenyang College of Shanxi Medical University, Fenyang, China
| | - Meihong Wang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zenghui Gao
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zhenzhen Qin
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Leixue Shen
- School of Computer Science and Technology, Xidian University, Xi'an, China
| |
Collapse
|
47
|
Viscaino M, Maass JC, Delano PH, Torrente M, Stott C, Auat Cheein F. Computer-aided diagnosis of external and middle ear conditions: A machine learning approach. PLoS One 2020; 15:e0229226. [PMID: 32163427 PMCID: PMC7067442 DOI: 10.1371/journal.pone.0229226] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 01/31/2020] [Indexed: 12/27/2022] Open
Abstract
In medicine, a misdiagnosis or the absence of specialists can affect the patient's health, leading to unnecessary tests and increasing the costs of healthcare. In particular, the lack of specialists in otolaryngology in third world countries forces patients to seek medical attention from general practitioners, who might not have enough training and experience to make correct diagnoses in this field. To tackle this problem, we propose and test a computer-aided system based on machine learning models and image processing techniques for otoscopic examination, as a support for more accurate diagnosis of ear conditions at primary care before specialist referral; in particular, for myringosclerosis, earwax plug, and chronic otitis media. To characterize the tympanic membrane and ear canal for each condition, we implemented three different feature extraction methods: color coherence vector, discrete cosine transform, and filter bank. We also considered three machine learning algorithms: support vector machine (SVM), k-nearest neighbor (k-NN), and decision trees to develop the ear condition predictor model. Our database included 160 images as a testing set and 720 images as training and validation sets, drawn from 180 patients. We repeatedly trained the learning models using the training dataset and evaluated them using the validation dataset to obtain the feature extraction method and learning model producing the highest validation accuracy. The results showed that the SVM and k-NN presented the best performance, followed by the decision tree model. Finally, we performed a classification stage (i.e., diagnosis) using the testing data, where the SVM model achieved an average classification accuracy of 93.9%, average sensitivity of 87.8%, average specificity of 95.9%, and average positive predictive value of 87.7%. The results show that this system might be used as a reference for general practitioners to make better decisions in diagnosing ear pathologies.
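The model comparison described above (SVM vs. k-NN vs. decision trees) can be sketched as follows on synthetic feature vectors standing in for the color coherence vector, DCT, and filter-bank descriptors; the hyperparameters are illustrative assumptions rather than the study's tuned values.

```python
# Hedged sketch: cross-validated comparison of three classical classifiers.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 48))                     # stand-in otoscopic image features
y = rng.integers(0, 4, size=400)                   # normal + three ear conditions (placeholder)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Decision tree", DecisionTreeClassifier(random_state=0))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```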
Collapse
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
| | - Juan C. Maass
- Interdisciplinary Program of Physiology and Biophysics, Facultad de Medicina, Instituto de Ciencias Biomedicas, Universidad de Chile, Santiago, Chile
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Paul H. Delano
- Department of Neuroscience, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Mariela Torrente
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Carlos Stott
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
- * E-mail:
| |
Collapse
|
48
|
Hoerter N, Gross SA, Liang PS. Artificial Intelligence and Polyp Detection. CURRENT TREATMENT OPTIONS IN GASTROENTEROLOGY 2020; 18:120-136. [PMID: 31960282 PMCID: PMC7371513 DOI: 10.1007/s11938-020-00274-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
PURPOSE OF REVIEW This review highlights the history, recent advances, and ongoing challenges of artificial intelligence (AI) technology in colonic polyp detection. RECENT FINDINGS Hand-crafted AI algorithms have recently given way to convolutional neural networks with the ability to detect polyps in real-time. The first randomized controlled trial comparing an AI system to standard colonoscopy found a 9% increase in adenoma detection rate, but the improvement was restricted to polyps smaller than 10 mm and the results need validation. As this field rapidly evolves, important issues to consider include standardization of outcomes, dataset availability, real-world applications, and regulatory approval. SUMMARY AI has shown great potential for improving colonic polyp detection while requiring minimal training for endoscopists. The question of when AI will enter endoscopic practice depends on whether the technology can be integrated into existing hardware and an assessment of its added value for patient care.
Collapse
Affiliation(s)
| | | | - Peter S Liang
- NYU Langone Health, New York, NY, USA.
- VA New York Harbor Health Care System, New York, NY, USA.
| |
Collapse
|
49
|
Majid A, Khan MA, Yasmin M, Rehman A, Yousafzai A, Tariq U. Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection. Microsc Res Tech 2020; 83:562-576. [PMID: 31984630 DOI: 10.1002/jemt.23447] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Revised: 12/28/2019] [Accepted: 01/13/2020] [Indexed: 12/11/2022]
Abstract
Automated detection and classification of gastric infections (i.e., ulcer, polyp, esophagitis, and bleeding) through wireless capsule endoscopy (WCE) is still a key challenge. Doctors can identify these endoscopic diseases by using computer-aided diagnostic (CAD) systems. In this article, a new fully automated system is proposed for the recognition of gastric infections through multi-type feature extraction, fusion, and robust feature selection. Five key steps are performed: database creation, handcrafted and convolutional neural network (CNN) deep feature extraction, fusion of the extracted features, selection of the best features using a genetic algorithm (GA), and recognition. In the feature extraction step, discrete cosine transform, discrete wavelet transform, strong color, and VGG16-based CNN features are extracted. These features are then fused by simple array concatenation, and a GA is applied to select the best features based on a K-Nearest Neighbor fitness function. Finally, the selected features are provided to an ensemble classifier for recognition of gastric diseases. A database is prepared using four datasets (Kvasir, CVC-ClinicDB, a private dataset, and ETIS-LaribPolypDB) covering four types of gastric infection: ulcer, polyp, esophagitis, and bleeding. On this database, the proposed technique performs better than existing methods and achieves an accuracy of 96.5%.
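A hedged sketch of the fuse-then-select idea outlined above: handcrafted and deep features are concatenated, and a small genetic algorithm searches binary feature masks using a k-NN cross-validation score as the fitness function. The population size, generation count, mutation rate, and synthetic features are assumptions for illustration, not the authors' settings.

```python
# Illustrative sketch: feature fusion by concatenation + GA-based feature selection with k-NN fitness.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
deep = rng.normal(size=(200, 30))                 # stand-in CNN (e.g., VGG16-like) features
handcrafted = rng.normal(size=(200, 10))          # stand-in DCT/DWT/color features
X = np.hstack([deep, handcrafted])                # fusion by simple concatenation
y = rng.integers(0, 4, size=200)                  # four gastric classes (placeholder labels)

def fitness(mask):
    """Cross-validated k-NN accuracy on the subset of features selected by a binary mask."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))   # initial population of random masks
for _ in range(10):                               # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]   # truncation selection of the fittest masks
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 6, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])        # single-point crossover
        flip = rng.random(X.shape[1]) < 0.05              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.asarray(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "of", X.shape[1])
```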
Collapse
Affiliation(s)
- Abdul Majid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Muhammad Attique Khan
- Department of Computer Science, HITEC University Museum Road, Taxila, Rawalpindi, Pakistan
| | - Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Amjad Rehman
- AIDA Lab CCIS, Prince Sultan University Riyadh, Riyadh, Saudi Arabia
| | - Abdullah Yousafzai
- Department of Computer Science, HITEC University Museum Road, Taxila, Rawalpindi, Pakistan
| | - Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
| |
Collapse
|
50
|
Khan MA, Kadry S, Alhaisoni M, Nam Y, Zhang Y, Rajinikanth V, Sarfraz MS. Computer-Aided Gastrointestinal Diseases Analysis From Wireless Capsule Endoscopy: A Framework of Best Features Selection. IEEE ACCESS 2020; 8:132850-132859. [DOI: 10.1109/access.2020.3010448] [Citation(s) in RCA: 61] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
|