1. Waheed Z, Gui J, Heyat MBB, Parveen S, Hayat MAB, Iqbal MS, Aya Z, Nawabi AK, Sawan M. A novel lightweight deep learning based approaches for the automatic diagnosis of gastrointestinal disease using image processing and knowledge distillation techniques. Comput Methods Programs Biomed 2025; 260:108579. PMID: 39798279. DOI: 10.1016/j.cmpb.2024.108579.
Abstract
BACKGROUND Gastrointestinal (GI) diseases pose significant challenges for healthcare systems, largely due to the complexities involved in their detection and treatment. Despite the advancements in deep neural networks, their high computational demands hinder their practical use in clinical environments. OBJECTIVE This study aims to address the computational inefficiencies of deep neural networks by proposing a lightweight model that integrates model compression techniques, ConvLSTM layers, and ConvNext Blocks, all optimized through Knowledge Distillation (KD). METHODS A dataset of 6000 endoscopic images of various GI diseases was utilized. Advanced image preprocessing techniques, including adaptive noise reduction and image detail enhancement, were employed to improve accuracy and interpretability. The model's performance was assessed in terms of accuracy, computational cost, and disk space usage. RESULTS The proposed lightweight model achieved an exceptional overall accuracy of 99.38 %. It operates efficiently with a computational cost of 0.61 GFLOPs and occupies only 3.09 MB of disk space. Additionally, Grad-CAM visualizations demonstrated enhanced model saliency and interpretability, offering insights into the decision-making process of the model post-KD. CONCLUSION The proposed model represents a significant advancement in the diagnosis of GI diseases. It provides a cost-effective and efficient alternative to traditional deep neural network methods, overcoming their computational limitations and contributing valuable insights for improved clinical application.
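The abstract names knowledge distillation (KD) as the key compression step but gives no implementation detail. The sketch below shows the standard temperature-scaled KD loss that the term usually refers to; the toy teacher/student models, the temperature T, and the weight alpha are illustrative assumptions, not the authors' code.
```python
# Minimal sketch of temperature-scaled knowledge distillation (illustrative, not the authors' model).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend the soft-target KL term (scaled by T^2) with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy stand-ins: a larger frozen teacher and a lightweight student, 8 GI disease classes.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 8))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 8))
x = torch.randn(4, 3, 224, 224)            # a batch standing in for endoscopic images
y = torch.randint(0, 8, (4,))
with torch.no_grad():
    teacher_logits = teacher(x)             # teacher is not updated
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()                             # gradients flow only into the student
```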
Affiliation(s)
- Zafran Waheed
- School of Computer Science and Engineering, Central South University, China.
- Jinsong Gui
- School of Electronic Information, Central South University, China.
- Md Belal Bin Heyat
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Zhejiang, Hangzhou, China.
- Saba Parveen
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
- Mohd Ammar Bin Hayat
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, China
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University of Azad Jammu & Kashmir, Pakistan
- Zouheir Aya
- College of Mechanical Engineering, Changsha University of Science and Technology, Changsha, Hunan, China
- Awais Khan Nawabi
- Department of Electronics, Computer science and Electrical Engineering, University of Pavia, Italy
- Mohamad Sawan
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Zhejiang, Hangzhou, China
2. Albuquerque C, Henriques R, Castelli M. Deep learning-based object detection algorithms in medical imaging: Systematic review. Heliyon 2025; 11:e41137. PMID: 39758372. PMCID: PMC11699422. DOI: 10.1016/j.heliyon.2024.e41137.
Abstract
Over the past decade, Deep Learning (DL) techniques have demonstrated remarkable advancements across various domains, driving their widespread adoption. Particularly in medical image analysis, DL has received considerable attention for tasks like image segmentation, object detection, and classification. This paper provides an overview of DL-based object recognition in medical images, exploring recent methods and emphasizing different imaging techniques and anatomical applications. Utilizing a meticulous quantitative and qualitative analysis following PRISMA guidelines, we examined publications based on citation rates to explore the utilization of DL-based object detectors across imaging modalities and anatomical domains. Our findings reveal a consistent rise in the utilization of DL-based object detection models, indicating unexploited potential in medical image analysis. Predominantly within the Medicine and Computer Science domains, research in this area is most active in the US, China, and Japan. Notably, DL-based object detection methods have attracted significant interest across diverse medical imaging modalities and anatomical domains. These methods have been applied to a range of modalities, including CR scans, pathology images, and endoscopic imaging, showcasing their adaptability. Moreover, diverse anatomical applications, particularly in digital pathology and microscopy, have been explored. The analysis underscores the presence of varied datasets, often with significant discrepancies in size, with a notable percentage being labeled as private or internal, and with prospective studies in this field remaining scarce. Our review of existing trends in DL-based object detection in medical images offers insights for future research directions. The continuous evolution of DL algorithms highlighted in the literature underscores the dynamic nature of this field, emphasizing the need for ongoing research and optimization tailored to specific applications.
3. Kusters CHJ, Jaspers TJM, Boers TGW, Jong MR, Jukema JB, Fockens KN, de Groof AJ, Bergman JJ, van der Sommen F, De With PHN. Will Transformers change gastrointestinal endoscopic image analysis? A comparative analysis between CNNs and Transformers, in terms of performance, robustness and generalization. Med Image Anal 2025; 99:103348. PMID: 39298861. DOI: 10.1016/j.media.2024.103348.
Abstract
Gastrointestinal endoscopic image analysis presents significant challenges, such as considerable variations in quality due to the challenging in-body imaging environment, the often-subtle nature of abnormalities with low interobserver agreement, and the need for real-time processing. These challenges pose strong requirements on the performance, generalization, robustness and complexity of deep learning-based techniques in such safety-critical applications. While Convolutional Neural Networks (CNNs) have been the go-to architecture for endoscopic image analysis, recent successes of the Transformer architecture in computer vision raise the possibility to update this conclusion. To this end, we evaluate and compare clinically relevant performance, generalization and robustness of state-of-the-art CNNs and Transformers for neoplasia detection in Barrett's esophagus. We have trained and validated several top-performing CNNs and Transformers on a total of 10,208 images (2,079 patients), and tested on a total of 7,118 images (998 patients) across multiple test sets, including a high-quality test set, two internal and two external generalization test sets, and a robustness test set. Furthermore, to expand the scope of the study, we have conducted the performance and robustness comparisons for colonic polyp segmentation (Kvasir-SEG) and angiodysplasia detection (Giana). The results obtained for featured models across a wide range of training set sizes demonstrate that Transformers achieve comparable performance as CNNs on various applications, show comparable or slightly improved generalization capabilities and offer equally strong resilience and robustness against common image corruptions and perturbations. These findings confirm the viability of the Transformer architecture, particularly suited to the dynamic nature of endoscopic video analysis, characterized by fluctuating image quality, appearance and equipment configurations in transition from hospital to hospital. The code is made publicly available at: https://github.com/BONS-AI-VCA-AMC/Endoscopy-CNNs-vs-Transformers.
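As a companion to this comparison, the sketch below shows the shape of such an evaluation: two torchvision backbones (a ResNet CNN and a ViT Transformer) scored with an identical loop. The random tensors, untrained weights, and plain accuracy metric are placeholders for the study's Barrett's datasets and clinically relevant metrics.
```python
# Illustrative CNN-vs-Transformer evaluation harness (placeholders, not the study's models or data).
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def evaluate(model, loader, device="cpu"):
    """Plain accuracy of `model` on `loader`; binary neoplasia labels assumed."""
    model.eval().to(device)
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

# Dummy stand-in for an endoscopic test set (2 classes: neoplastic vs non-neoplastic).
loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))),
    batch_size=4,
)

backbones = {
    "ResNet-50 (CNN)": models.resnet50(weights=None, num_classes=2),
    "ViT-B/16 (Transformer)": models.vit_b_16(weights=None, num_classes=2),
}
for name, net in backbones.items():
    print(name, evaluate(net, loader))
```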
Affiliation(s)
- Carolus H J Kusters
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Tim J M Jaspers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tim G W Boers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Martijn R Jong
- Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jelmer B Jukema
- Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Kiki N Fockens
- Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Albert J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jacques J Bergman
- Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N De With
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
4. Wohl P, Krausova A, Wohl P, Fabian O, Bajer L, Brezina J, Drastich P, Hlavaty M, Novotna P, Kahle M, Spicak J, Gregor M. Limited validity of Mayo endoscopic subscore in ulcerative colitis with concomitant primary sclerosing cholangitis. World J Gastrointest Endosc 2024; 16:607-616. PMID: 39600557. PMCID: PMC11586720. DOI: 10.4253/wjge.v16.i11.607.
Abstract
BACKGROUND Ulcerative colitis (UC) with concomitant primary sclerosing cholangitis (PSC) represents a distinct disease entity (PSC-UC). Mayo endoscopic subscore (MES) is a standard tool for assessing disease activity in UC but its relevance in PSC-UC remains unclear. AIM To assess the accuracy of MES in UC and PSC-UC patients, we performed histological scoring using Nancy histological index (NHI). METHODS MES was assessed in 30 PSC-UC and 29 UC adult patients during endoscopy. NHI and inflammation were evaluated in biopsies from the cecum, rectum, and terminal ileum. In addition, perinuclear anti-neutrophil cytoplasmic antibodies, fecal calprotectin, body mass index, and other relevant clinical characteristics were collected. RESULTS The median MES and NHI were similar for UC patients (MES grade 2 and NHI grade 2 in the rectum) but were different for PSC-UC patients (MES grade 0 and NHI grade 2 in the cecum). There was a correlation between MES and NHI for UC patients (Spearman's r = 0.40, P = 0.029) but not for PSC-UC patients. Histopathological examination revealed persistent microscopic inflammation in 88% of PSC-UC patients with MES grade 0 (46% of all PSC-UC patients). Moreover, MES overestimated the severity of active inflammation in an additional 11% of PSC-UC patients. CONCLUSION MES insufficiently identifies microscopic inflammation in PSC-UC. This indicates that histological evaluation should become a routine procedure of the diagnostic and grading system in both PSC-UC and PSC.
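The key statistic here is a Spearman rank correlation between the endoscopic (MES) and histological (NHI) grades; a minimal sketch is shown below with made-up toy values, not the study's data.
```python
# Spearman correlation between per-patient MES and NHI grades (toy values only).
from scipy.stats import spearmanr

mes = [2, 1, 3, 0, 2, 2, 1, 3, 0, 2]   # Mayo endoscopic subscore
nhi = [2, 1, 3, 1, 2, 3, 1, 4, 0, 2]   # Nancy histological index

rho, p_value = spearmanr(mes, nhi)
print(f"Spearman r = {rho:.2f}, P = {p_value:.3f}")   # the study reports r = 0.40, P = 0.029 in UC
```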
Affiliation(s)
- Pavel Wohl
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Alzbeta Krausova
- Department of Integrative Biology, Institute of Molecular Genetics of the Czech Academy of Sciences, Prague 14220, Czech Republic
- Petr Wohl
- Department of Metabolism and Diabetes, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Ondrej Fabian
- Clinical and Transplant Pathology Centre, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Lukas Bajer
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Jan Brezina
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Pavel Drastich
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Mojmir Hlavaty
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Petra Novotna
- Department of Integrative Biology, Institute of Molecular Genetics of the Czech Academy of Sciences, Prague 14220, Czech Republic
- Michal Kahle
- Department of Data Analysis, Statistics and Artificial Intelligence, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Julius Spicak
- Department of Gastroenterology, Institute for Clinical and Experimental Medicine, Prague 14021, Czech Republic
- Martin Gregor
- Department of Integrative Biology, Institute of Molecular Genetics of the Czech Academy of Sciences, Prague 14220, Czech Republic
5. Lee L, Lin C, Hsu CJ, Lin HH, Lin TC, Liu YH, Hu JM. Applying Deep-Learning Algorithm Interpreting Kidney, Ureter, and Bladder (KUB) X-Rays to Detect Colon Cancer. J Imaging Inform Med 2024. PMID: 39482492. DOI: 10.1007/s10278-024-01309-1.
Abstract
Early screening is crucial in reducing the mortality of colorectal cancer (CRC). Current screening methods, including fecal occult blood tests (FOBT) and colonoscopy, are primarily limited by low patient compliance and the invasive nature of the procedures. Several advanced imaging techniques, such as computed tomography (CT) and histological imaging, have been integrated with artificial intelligence (AI) to enhance the detection of CRC, but these approaches remain limited by the challenges and cost associated with image acquisition. Kidney, ureter, and bladder (KUB) radiography is inexpensive, widely used for abdominal assessment in emergency settings, and shows potential for detecting CRC when enhanced using advanced techniques. This study aimed to develop a deep learning model (DLM) to detect CRC using KUB radiographs. This retrospective study was conducted using data from the Tri-Service General Hospital (TSGH) between January 2011 and December 2020, including patients with at least one KUB radiograph. Patients were divided into development (n = 28,055), tuning (n = 11,234), and internal validation (n = 16,875) sets. An additional 15,876 patients were collected from a community hospital as the external validation set. A 121-layer DenseNet convolutional network was trained to classify KUB images for CRC detection. The model performance was evaluated using receiver operating characteristic curves, with sensitivity, specificity, and area under the curve (AUC) as metrics. The AUC, sensitivity, and specificity of the DLM were 0.738, 61.3%, and 74.4% in the internal validation set and 0.656, 47.7%, and 72.9% in the external validation set, respectively. The model performed better for high-grade CRC, with AUCs of 0.744 and 0.674 in the internal and external sets, respectively. Stratified analysis showed superior performance in females aged 55-64 with high-grade cancers. AI-positive predictions were associated with a higher long-term risk of all-cause mortality in both validation cohorts. AI-assisted KUB X-ray analysis can broaden CRC screening coverage and effectiveness, providing a cost-effective alternative to traditional methods. Further prospective studies are necessary to validate these findings and fully integrate this technology into clinical practice.
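A minimal sketch of the recipe described here (a DenseNet-121 classifier scored with AUC, sensitivity, and specificity) follows; the random images, labels, untrained weights, and 0.5 threshold are assumptions for illustration only.
```python
# Sketch: DenseNet-121 binary classifier scored with AUC, sensitivity and specificity.
import numpy as np
import torch
from torchvision import models
from sklearn.metrics import roc_auc_score, confusion_matrix

model = models.densenet121(weights=None, num_classes=1).eval()   # one logit: CRC vs no CRC

x = torch.randn(16, 3, 224, 224)               # stand-in KUB radiographs
y_true = np.array([0] * 8 + [1] * 8)           # stand-in labels
with torch.no_grad():
    y_score = torch.sigmoid(model(x)).squeeze(1).numpy()

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)          # operating threshold chosen arbitrarily here
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```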
Affiliation(s)
- Ling Lee
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Chin Lin
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Military Digital Medical Center, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Chia-Jung Hsu
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Medical Informatics Office, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Heng-Hsiu Lin
- School of Public Health, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Medical Informatics Office, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Tzu-Chiao Lin
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Yu-Hong Liu
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan
- Je-Ming Hu
- School of Medicine, National Defense Medical Center, Taipei, R.O.C, Taiwan.
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, R.O.C, Taiwan.
- Graduate Institute of Medical Sciences, National Defense Medical Center, No 325, Section 2, Cheng-Kung Road, Neihu 114, Taipei, R.O.C, Taiwan.
6. Wang L, Wan J, Meng X, Chen B, Shao W. MCH-PAN: gastrointestinal polyp detection model integrating multi-scale feature information. Sci Rep 2024; 14:23382. PMID: 39379452. PMCID: PMC11461898. DOI: 10.1038/s41598-024-74609-9.
Abstract
The rise of object detection models has brought new breakthroughs to the development of clinical decision support systems. However, in the field of gastrointestinal polyp detection, there are still challenges such as uncertainty in polyp identification and inadequate coping with polyp scale variations. To address these challenges, this paper proposes a novel gastrointestinal polyp object detection model. The model can automatically identify polyp regions in gastrointestinal images and accurately label them. In terms of design, the model integrates multi-channel information to enhance the ability and robustness of channel feature expression, thus better coping with the complexity of polyp structures. At the same time, a hierarchical structure is constructed in the model to enhance the model's adaptability to multi-scale targets, effectively addressing the problem of large-scale variations in polyps. Furthermore, a channel attention mechanism is designed in the model to improve the accuracy of target positioning and reduce uncertainty in diagnosis. By integrating these strategies, the proposed gastrointestinal polyp object detection model can achieve accurate polyp detection, providing clinicians with reliable and valuable references. Experimental results show that the model exhibits superior performance in gastrointestinal polyp detection, which helps improve the diagnostic level of digestive system diseases and provides useful references for related research fields.
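The paper's exact layers are not given in the abstract; the sketch below shows a generic squeeze-and-excitation style channel attention block of the kind described, i.e., global pooling followed by a small bottleneck that re-weights feature channels.
```python
# Generic channel attention (squeeze-and-excitation style), not the MCH-PAN implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial context per channel
        self.fc = nn.Sequential(                  # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # emphasize informative channels

feature_map = torch.randn(2, 256, 32, 32)         # one level of a multi-scale feature pyramid
print(ChannelAttention(256)(feature_map).shape)   # torch.Size([2, 256, 32, 32])
```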
Affiliation(s)
- Ling Wang
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China.
- Jingjing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223002, China.
- Xianchun Meng
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bolun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Wei Shao
- Nanjing University of Aeronautics and Astronautics Shenzhen Research Institute, Shenzhen, 518038, China.
7. Enslin S, Kaul V. Past, Present, and Future. Gastrointest Endosc Clin N Am 2024. DOI: 10.1016/j.giec.2024.09.003.
8. Thijssen A, Schreuder RM, Dehghani N, Schor M, de With PH, van der Sommen F, Boonstra JJ, Moons LM, Schoon EJ. Improving the endoscopic recognition of early colorectal carcinoma using artificial intelligence: current evidence and future directions. Endosc Int Open 2024; 12:E1102-E1117. PMID: 39398448. PMCID: PMC11466514. DOI: 10.1055/a-2403-3103.
Abstract
Background and study aims Artificial intelligence (AI) has great potential to improve endoscopic recognition of early stage colorectal carcinoma (CRC). This scoping review aimed to summarize current evidence on this topic, provide an overview of the methodologies currently used, and guide future research. Methods A systematic search was performed following the PRISMA-Scr guideline. PubMed (including Medline), Scopus, Embase, IEEE Xplore, and ACM Digital Library were searched up to January 2024. Studies were eligible for inclusion when using AI for distinguishing CRC from colorectal polyps on endoscopic imaging, using histopathology as gold standard, reporting sensitivity, specificity, or accuracy as outcomes. Results Of 5024 screened articles, 26 were included. Computer-aided diagnosis (CADx) system classification categories ranged from two categories, such as lesions suitable or unsuitable for endoscopic resection, to five categories, such as hyperplastic polyp, sessile serrated lesion, adenoma, cancer, and other. The number of images used in testing databases varied from 69 to 84,585. Diagnostic performances were divergent, with sensitivities varying from 55.0% to 99.2%, specificities from 67.5% to 100% and accuracies from 74.4% to 94.4%. Conclusions This review highlights that using AI to improve endoscopic recognition of early stage CRC is an upcoming research field. We introduced a suggestions list of essential subjects to report in research regarding the development of endoscopy CADx systems, aiming to facilitate more complete reporting and better comparability between studies. There is a knowledge gap regarding real-time CADx system performance during multicenter external validation. Future research should focus on development of CADx systems that can differentiate CRC from premalignant lesions, while providing an indication of invasion depth.
Affiliation(s)
- Ayla Thijssen
- GROW Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Department of Gastroenterology and Hepatology, Maastricht Universitair Medisch Centrum+, Maastricht, Netherlands
- Ramon-Michel Schreuder
- GROW Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Netherlands
- Nikoo Dehghani
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Marieke Schor
- University Library, Department of Education and Support, Maastricht University, Maastricht, Netherlands
- Peter H.N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Jurjen J. Boonstra
- Department of Gastroenterology and Hepatology, Leids Universitair Medisch Centrum, Leiden, Netherlands
- Leon M.G. Moons
- Department of Gastroenterology and Hepatology, University Medical Center Utrecht, Utrecht, Netherlands
- Erik J. Schoon
- GROW Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Netherlands
9. Mota J, Almeida MJ, Martins M, Mendes F, Cardoso P, Afonso J, Ribeiro T, Ferreira J, Fonseca F, Limbert M, Lopes S, Macedo G, Castro Poças F, Mascarenhas M. Artificial Intelligence in Coloproctology: A Review of Emerging Technologies and Clinical Applications. J Clin Med 2024; 13:5842. PMID: 39407902. PMCID: PMC11477032. DOI: 10.3390/jcm13195842.
Abstract
Artificial intelligence (AI) has emerged as a transformative tool across several specialties, namely gastroenterology, where it has the potential to optimize both diagnosis and treatment as well as enhance patient care. Coloproctology, due to its highly prevalent pathologies and tremendous potential to cause significant mortality and morbidity, has drawn a lot of attention regarding AI applications. In fact, its application has yielded impressive outcomes in various domains, colonoscopy being one prominent example, where it aids in the detection of polyps and early signs of colorectal cancer with high accuracy and efficiency. With a less explored path but equivalent promise, AI-powered capsule endoscopy ensures accurate and time-efficient video readings, already detecting a wide spectrum of anomalies. High-resolution anoscopy is an area that has been growing in interest in recent years, with efforts being made to integrate AI. There are other areas, such as functional studies, that are currently in the early stages, but evidence is expected to emerge soon. According to the current state of research, AI is anticipated to empower gastroenterologists in the decision-making process, paving the way for a more precise approach to diagnosing and treating patients. This review aims to provide the state-of-the-art use of AI in coloproctology while also reflecting on future directions and perspectives.
Affiliation(s)
- Joana Mota
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Maria João Almeida
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- João Afonso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, 4200-065 Porto, Portugal;
- DigestAID—Digestive Artificial Intelligence Development, Rua Alfredo Allen n.° 455/461, 4200-135 Porto, Portugal
- Filipa Fonseca
- Instituto Português de Oncologia de Lisboa Francisco Gentil (IPO Lisboa), 1099-023 Lisboa, Portugal; (F.F.); (M.L.)
- Manuel Limbert
- Instituto Português de Oncologia de Lisboa Francisco Gentil (IPO Lisboa), 1099-023 Lisboa, Portugal; (F.F.); (M.L.)
- Artificial Intelligence Group of the Portuguese Society of Coloproctology, 1050-117 Lisboa, Portugal;
- Susana Lopes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Artificial Intelligence Group of the Portuguese Society of Coloproctology, 1050-117 Lisboa, Portugal;
- Faculty of Medicine, University of Porto, 4200-047 Porto, Portugal
- Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-047 Porto, Portugal
- Fernando Castro Poças
- Artificial Intelligence Group of the Portuguese Society of Coloproctology, 1050-117 Lisboa, Portugal;
- Department of Gastroenterology, Santo António University Hospital, 4099-001 Porto, Portugal
- Abel Salazar Biomedical Sciences Institute (ICBAS), 4050-313 Porto, Portugal
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (J.M.); (M.J.A.); (M.M.); (F.M.); (P.C.); (J.A.); (T.R.); (S.L.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Artificial Intelligence Group of the Portuguese Society of Coloproctology, 1050-117 Lisboa, Portugal;
- Faculty of Medicine, University of Porto, 4200-047 Porto, Portugal
10. Tai J, Han M, Choi BY, Kang SH, Kim H, Kwak J, Lee D, Lee TH, Cho Y, Kim TH. Deep learning model for differentiating nasal cavity masses based on nasal endoscopy images. BMC Med Inform Decis Mak 2024; 24:145. PMID: 38811961. PMCID: PMC11138030. DOI: 10.1186/s12911-024-02517-z.
Abstract
BACKGROUND Nasal polyps and inverted papillomas often look similar. Clinically, it is difficult to distinguish the masses by endoscopic examination. Therefore, in this study, we aimed to develop a deep learning algorithm for computer-aided diagnosis of nasal endoscopic images, which may provide a more accurate clinical diagnosis before pathologic confirmation of the nasal masses. METHODS By performing deep learning of nasal endoscope images, we evaluated our computer-aided diagnosis system's assessment ability for nasal polyps and inverted papilloma and the feasibility of their clinical application. We used curriculum learning pre-trained with patches of nasal endoscopic images and full-sized images. The proposed model's performance for classifying nasal polyps, inverted papilloma, and normal tissue was analyzed using five-fold cross-validation. RESULTS The normal scores for our best-performing network were 0.9520 for recall, 0.7900 for precision, 0.8648 for F1-score, 0.97 for the area under the curve, and 0.8273 for accuracy. For nasal polyps, the best performance was 0.8162, 0.8496, 0.8409, 0.89, and 0.8273, respectively, for recall, precision, F1-score, area under the curve, and accuracy. Finally, for inverted papilloma, the best performance was obtained for recall, precision, F1-score, area under the curve, and accuracy values of 0.5172, 0.8125, 0.6122, 0.83, and 0.8273, respectively. CONCLUSION Although there were some misclassifications, the results of gradient-weighted class activation mapping were generally consistent with the areas under the curve determined by otolaryngologists. These results suggest that the convolutional neural network is highly reliable in resolving lesion locations in nasal endoscopic images.
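The evaluation protocol (five-fold cross-validation with per-class recall, precision, F1, and AUC) can be sketched as below; a logistic regression on synthetic features stands in for the curriculum-trained CNN, and all numbers are placeholders.
```python
# Sketch of five-fold cross-validated per-class metrics (synthetic data, stand-in classifier).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import recall_score, precision_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))          # stand-in image features
y = rng.integers(0, 3, size=300)        # 0 = normal, 1 = nasal polyp, 2 = inverted papilloma

fold_scores = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    prob = clf.predict_proba(X[test_idx])
    fold_scores.append({
        "recall": recall_score(y[test_idx], pred, average=None),
        "precision": precision_score(y[test_idx], pred, average=None, zero_division=0),
        "f1": f1_score(y[test_idx], pred, average=None),
        "auc_ovr": roc_auc_score(y[test_idx], prob, multi_class="ovr"),
    })
print(fold_scores[0])                    # per-class recall/precision/F1 plus macro OVR AUC
```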
Affiliation(s)
- Junhu Tai
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Munsoo Han
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Mucosal Immunology Institute, College of Medicine, Korea University, Seoul, Republic of Korea
- Bo Yoon Choi
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Sung Hoon Kang
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Hyeongeun Kim
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Jiwon Kwak
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Dabin Lee
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Tae Hoon Lee
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea
- Yongwon Cho
- Department of Radiology and AI center, College of Medicine, Korea University, Seoul, Republic of Korea.
- Department of Computer Science and Engineering, Soonchunhyang University, Cheonan-Asan, Republic of Korea.
- Tae Hoon Kim
- Department of Otorhinolaryngology-Head & Neck Surgery, College of Medicine, Korea University, Seoul, Republic of Korea.
- Mucosal Immunology Institute, College of Medicine, Korea University, Seoul, Republic of Korea.
11. Jaspers TJM, Boers TGW, Kusters CHJ, Jong MR, Jukema JB, de Groof AJ, Bergman JJ, de With PHN, van der Sommen F. Robustness evaluation of deep neural networks for endoscopic image analysis: Insights and strategies. Med Image Anal 2024; 94:103157. PMID: 38574544. DOI: 10.1016/j.media.2024.103157.
Abstract
Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, the image quality heavily relies on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used for developing the system and the data it encounters after deployment, and its impact on the performance of the deep neural networks (DNNs) supporting endoscopic CAD systems, remains largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and operator experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly-used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset including 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques for improved robustness. The results reveal a considerable decline in performance of 11.6% (±1.5) as compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set including images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.
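The core protocol (score a model on clean frames and on synthetically degraded copies, then report the relative drop) is sketched below; the degradations, severities, and untrained ResNet-18 are illustrative, not the clinically calibrated corruptions used in the paper.
```python
# Sketch of a robustness check under synthetic image degradations (illustrative settings only).
import torch
from torchvision import models
from torchvision.transforms import GaussianBlur

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = models.resnet18(weights=None, num_classes=2).eval()
x_clean = torch.rand(16, 3, 224, 224)                 # stand-in endoscopic frames in [0, 1]
y = torch.randint(0, 2, (16,))

degradations = {
    "motion/defocus blur": lambda im: GaussianBlur(kernel_size=9, sigma=3.0)(im),
    "sensor noise": lambda im: (im + 0.1 * torch.randn_like(im)).clamp(0, 1),
    "under-illumination": lambda im: 0.5 * im,
}

reference = accuracy(model, x_clean, y)
for name, degrade in degradations.items():
    acc = accuracy(model, degrade(x_clean), y)
    drop = 100 * (reference - acc) / max(reference, 1e-8)
    print(f"{name}: relative performance drop = {drop:.1f}%")
```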
Affiliation(s)
- Tim J M Jaspers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Tim G W Boers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Carolus H J Kusters
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Martijn R Jong
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Jelmer B Jukema
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Albert J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Jacques J Bergman
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
12. Guo F, Meng H. Application of artificial intelligence in gastrointestinal endoscopy. Arab J Gastroenterol 2024; 25:93-96. PMID: 38228443. DOI: 10.1016/j.ajg.2023.12.010.
Abstract
Endoscopy is an important method for diagnosing gastrointestinal (GI) diseases. In this study, we provide an overview of the advances in artificial intelligence (AI) technology in the field of GI endoscopy over recent years, including esophagus, stomach, large intestine, and capsule endoscopy (small intestine). AI-assisted endoscopy shows high accuracy, sensitivity, and specificity in the detection and diagnosis of GI diseases at all levels. Hence, AI will make a breakthrough in the field of GI endoscopy in the near future. However, AI technology currently has some limitations and is still in the preclinical stages.
Affiliation(s)
- Fujia Guo
- The first Affiliated Hospital, Dalian Medical University, Dalian 116044, China
- Hua Meng
- The first Affiliated Hospital, Dalian Medical University, Dalian 116044, China.
13. Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. PMID: 38277922. DOI: 10.1016/j.compbiomed.2024.108008.
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ from specialized narrow-band images (NBI). Nonetheless, such vascular characterization is lost from standard colonoscopies (the primary attention mechanism). Besides, even for NBI observations, the categorization remains biased by expert observation, with reported classification errors from 59.5% to 84.2%. This work introduces an end-to-end computational strategy to enhance in situ standard colonoscopy observations, including vascular patterns typically observed from NBI mechanisms. These retrieved synthetic images are achieved by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The introduced scheme includes an architecture to discriminate enhanced neoplastic patterns, achieving a remarkable separation in the embedding representation. The proposed approach was validated in a public dataset with a total of 76 sequences, including standard optical sequences and the respective NBI observations. The enhanced optical sequences were automatically classified as adenomas or hyperplastic samples, achieving an F1-score of 0.86. To measure the sensitivity of the proposed approach, serrated samples were projected onto the trained architecture. In this experiment, statistical differences among the three classes (p-value < 0.05, Mann-Whitney U test) were reported. This work showed remarkable polyp discrimination results in enhancing OC sequences regarding typical NBI patterns. This method also learns polyp class distributions under an unpaired criterion (close to real practice), with the capability to separate serrated samples from adenomas and hyperplastic ones.
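The statistical check reported here (pairwise Mann-Whitney U tests between classes after projecting samples into the learned embedding) can be sketched as follows; the one-dimensional "embedding scores" are synthetic placeholders, not the paper's representation.
```python
# Pairwise Mann-Whitney U tests between class-wise embedding scores (synthetic placeholders).
from itertools import combinations
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
scores = {                                   # e.g., projection onto a neoplastic axis
    "adenoma": rng.normal(2.0, 0.5, 40),
    "hyperplastic": rng.normal(0.5, 0.5, 40),
    "serrated": rng.normal(1.3, 0.5, 20),    # unseen class projected onto the trained model
}

for a, b in combinations(scores, 2):
    u, p = mannwhitneyu(scores[a], scores[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.0f}, p = {p:.3g}")
```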
Affiliation(s)
- Franklin Sierra-Jerez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia.
14. Kato S, Kudo SE, Minegishi Y, Miyata Y, Maeda Y, Kuroki T, Takashina Y, Mochizuki K, Tamura E, Abe M, Sato Y, Sakurai T, Kouyama Y, Tanaka K, Ogawa Y, Nakamura H, Ichimasa K, Ogata N, Hisayuki T, Hayashi T, Wakamura K, Miyachi H, Baba T, Ishida F, Nemoto T, Misawa M. Impact of computer-aided characterization for diagnosis of colorectal lesions, including sessile serrated lesions: Multireader, multicase study. Dig Endosc 2024; 36:341-350. PMID: 37937532. DOI: 10.1111/den.14612.
Abstract
OBJECTIVES Computer-aided characterization (CADx) may be used to implement optical biopsy strategies into colonoscopy practice; however, its impact on endoscopic diagnosis remains unknown. We aimed to evaluate the additional diagnostic value of CADx when used by endoscopists for assessing colorectal polyps. METHODS This was a single-center, multicase, multireader, image-reading study using randomly extracted images of pathologically confirmed polyps resected between July 2021 and January 2022. Approved CADx that could predict two-tier classification (neoplastic or nonneoplastic) by analyzing narrow-band images of the polyps was used to obtain a CADx diagnosis. Participating endoscopists determined if the polyps were neoplastic or not and noted their confidence level using a computer-based, image-reading test. The test was conducted twice with a 4-week interval: the first test was conducted without CADx prediction and the second test with CADx prediction. Diagnostic performances for neoplasms were calculated using the pathological diagnosis as reference and performances with and without CADx prediction were compared. RESULTS Five hundred polyps were randomly extracted from 385 patients and diagnosed by 14 endoscopists (including seven experts). The sensitivity for neoplasia was significantly improved by referring to CADx (89.4% vs. 95.6%). CADx also had incremental effects on the negative predictive value (69.3% vs. 84.3%), overall accuracy (87.2% vs. 91.8%), and high-confidence diagnosis rate (77.4% vs. 85.8%). However, there was no significant difference in specificity (80.1% vs. 78.9%). CONCLUSIONS Computer-aided characterization has added diagnostic value for differentiating colorectal neoplasms and may improve the high-confidence diagnosis rate.
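The headline comparison (reader sensitivity, NPV, and accuracy with versus without CADx support, against pathology) reduces to confusion-matrix arithmetic; the sketch below uses fabricated reads purely to show the computation.
```python
# Diagnostic metrics of reads with vs. without CADx support (fabricated labels, illustration only).
import numpy as np
from sklearn.metrics import confusion_matrix

def summarize(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

rng = np.random.default_rng(0)
pathology = rng.integers(0, 2, 500)                                   # 1 = neoplastic
read_without = np.where(rng.random(500) < 0.87, pathology, 1 - pathology)
read_with = np.where(rng.random(500) < 0.92, pathology, 1 - pathology)

print("without CADx:", summarize(pathology, read_without))
print("with CADx:   ", summarize(pathology, read_with))
```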
Affiliation(s)
- Shun Kato
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yosuke Minegishi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuki Miyata
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yasuharu Maeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Takanori Kuroki
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuki Takashina
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kenichi Mochizuki
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Eri Tamura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masahiro Abe
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuta Sato
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Tatsuya Sakurai
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuta Kouyama
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kenta Tanaka
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yushi Ogawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hiroki Nakamura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Katsuro Ichimasa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Department of Gastroenterology and Hepatology, National University Hospital, Singapore City, Singapore
- Noriyuki Ogata
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Tomokazu Hisayuki
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Takemasa Hayashi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kunihiko Wakamura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hideyuki Miyachi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toshiyuki Baba
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Fumio Ishida
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Tetsuo Nemoto
- Department of Diagnostic Pathology and Laboratory Medicine, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
15. Chino A, Ide D, Abe S, Yoshinaga S, Ichimasa K, Kudo T, Ninomiya Y, Oka S, Tanaka S, Igarashi M. Performance evaluation of a computer-aided polyp detection system with artificial intelligence for colonoscopy. Dig Endosc 2024; 36:185-194. PMID: 37099623. DOI: 10.1111/den.14578.
Abstract
OBJECTIVES A computer-aided detection (CAD) system was developed to support the detection of colorectal lesions by deep learning using video images of lesions and normal mucosa recorded during colonoscopy. The study's purpose was to evaluate the stand-alone performance of this device under blinded conditions. METHODS This multicenter prospective observational study was conducted at four Japanese institutions. We used 326 videos of colonoscopies recorded with patient consent at institutions in which the Ethics Committees approved the study. The successful-detection sensitivity of the CAD system was calculated on target lesions, which were identified for each lesion-appearance frame by adjudicators from two facilities; inconsistencies were settled by consensus. Successful detection was defined as display of the detection flag on the lesion for more than 0.5 s within 3 s of appearance. RESULTS Of the 556 target lesions from 185 cases, the successful-detection sensitivity was 97.5% (95% confidence interval [CI] 95.8-98.5%). The successful-detection sensitivity per colonoscopy was 93% (95% CI 88.3-95.8%). The frame-based sensitivity, specificity, positive predictive value, and negative predictive value were 86.6% (95% CI 84.8-88.4%), 84.7% (95% CI 83.8-85.6%), 34.9% (95% CI 32.3-37.4%), and 98.2% (95% CI 97.8-98.5%), respectively. TRIAL REGISTRATION University Hospital Medical Information Network (UMIN000044622).
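The per-lesion success rule stated here (the flag must be displayed for more than 0.5 s within 3 s of the lesion's first appearance) can be written down directly; the sketch below assumes a 30 fps video and reads the rule as cumulative flagged time, which is one plausible interpretation.
```python
# One reading of the per-lesion "successful detection" rule (assumed 30 fps, cumulative on-time).
def successful_detection(flag_frames, fps=30.0, window_s=3.0, min_on_s=0.5):
    """flag_frames: per-frame booleans starting at the lesion's first appearance frame."""
    window = int(round(window_s * fps))
    flagged_frames = sum(bool(f) for f in flag_frames[:window])
    return flagged_frames / fps > min_on_s

# Toy example: the flag appears 1 s after the lesion and stays on for 0.6 s -> success.
flags = [False] * 30 + [True] * 18 + [False] * 60
print(successful_detection(flags))   # True; such lesions count toward the 97.5% sensitivity
```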
Affiliation(s)
- Akiko Chino
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
- Daisuke Ide
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Katsuro Ichimasa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo
- Tokyo Endoscopic Clinic, Tokyo, Japan
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuki Ninomiya
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Shiro Oka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Masahiro Igarashi
- Department of Gastroenterology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Tokyo, Japan
16. Young E, Edwards L, Singh R. The Role of Artificial Intelligence in Colorectal Cancer Screening: Lesion Detection and Lesion Characterization. Cancers (Basel) 2023; 15:5126. PMID: 37958301. PMCID: PMC10647850. DOI: 10.3390/cancers15215126.
Abstract
Colorectal cancer remains a leading cause of cancer-related morbidity and mortality worldwide, despite the widespread uptake of population surveillance strategies. This is in part due to the persistent development of 'interval colorectal cancers', where patients develop colorectal cancer despite appropriate surveillance intervals, implying pre-malignant polyps were not resected at a prior colonoscopy. Multiple techniques have been developed to improve the sensitivity and accuracy of lesion detection and characterisation in an effort to improve the efficacy of colorectal cancer screening, thereby reducing the incidence of interval colorectal cancers. This article presents a comprehensive review of the transformative role of artificial intelligence (AI), which has recently emerged as one such solution for improving the quality of screening and surveillance colonoscopy. Firstly, AI-driven algorithms demonstrate remarkable potential in addressing the challenge of overlooked polyps, particularly polyp subtypes infamous for escaping human detection because of their inconspicuous appearance. Secondly, AI empowers gastroenterologists without exhaustive training in advanced mucosal imaging to characterise polyps with accuracy similar to that of expert interventionalists, reducing the dependence on pathologic evaluation and guiding appropriate resection techniques or referrals for more complex resections. AI in colonoscopy holds the potential to advance the detection and characterisation of polyps, addressing current limitations and improving patient outcomes. The integration of AI technologies into routine colonoscopy represents a promising step towards more effective colorectal cancer screening and prevention.
Affiliation(s)
- Edward Young
- Faculty of Health and Medical Sciences, University of Adelaide, Lyell McEwin Hospital, Haydown Rd, Elizabeth Vale, SA 5112, Australia
- Louisa Edwards
- Faculty of Health and Medical Sciences, University of Adelaide, Queen Elizabeth Hospital, Port Rd, Woodville South, SA 5011, Australia
- Rajvinder Singh
- Faculty of Health and Medical Sciences, University of Adelaide, Lyell McEwin Hospital, Haydown Rd, Elizabeth Vale, SA 5112, Australia
17. Keshtkar K, Reza Safarpour A, Heshmat R, Sotoudehmanesh R, Keshtkar A. A Systematic Review and Meta-analysis of Convolutional Neural Network in the Diagnosis of Colorectal Polyps and Cancer. Turk J Gastroenterol 2023; 34:985-997. PMID: 37681266. PMCID: PMC10645297. DOI: 10.5152/tjg.2023.22491.
Abstract
Convolutional neural networks are a class of deep neural networks used for different clinical purposes, including improving the detection rate of colorectal lesions. This systematic review and meta-analysis aimed to assess the performance of convolutional neural network-based models in the detection or classification of colorectal polyps and colorectal cancer. A systematic search was performed in MEDLINE, SCOPUS, Web of Science, and other related databases. The performance measures of the convolutional neural network models in the detection of colorectal polyps and colorectal cancer were calculated under two scenarios, the best and the worst accuracy. Stata and R software were used for conducting the meta-analysis. From 3368 searched records, 24 primary studies were included. The sensitivity and specificity of convolutional neural network models in predicting colorectal polyps in the worst and best scenarios ranged from 84.7% to 91.6% and from 86.0% to 93.8%, respectively. These values in predicting colorectal cancer varied between 93.2% and 94.1% and between 94.6% and 97.7%. The positive and negative likelihood ratios varied between 6.2 and 14.5 and between 0.09 and 0.17 in these scenarios, respectively, in predicting colorectal polyps, and between 17.1 and 41.2 and between 0.06 and 0.07 in predicting colorectal cancer. The diagnostic odds ratio and accuracy of convolutional neural network models in predicting colorectal polyps in the worst and best scenarios ranged between 36 and 162 and between 80.5% and 88.6%, respectively. These values in predicting colorectal cancer in the worst and best scenarios varied between 239.63 and 677.47 and between 88.2% and 96.4%. The area under the receiver operating characteristic curve varied between 0.92 and 0.97 in the worst and best scenarios for colorectal polyp prediction, respectively, and between 0.98 and 0.99 for colorectal cancer prediction. Convolutional neural network-based models showed an acceptable accuracy in detecting colorectal polyps and colorectal cancer.
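The pooled likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity; the sketch below shows the relations, re-using the best-scenario polyp figures purely as example inputs (pooled meta-analytic estimates will not match these back-calculations exactly).
```python
# Likelihood ratios and diagnostic odds ratio from sensitivity/specificity (example inputs only).
def diagnostic_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)      # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity      # negative likelihood ratio
    dor = lr_pos / lr_neg                         # diagnostic odds ratio
    return lr_pos, lr_neg, dor

lr_pos, lr_neg, dor = diagnostic_ratios(0.916, 0.938)   # best-scenario polyp sens/spec
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, DOR = {dor:.0f}")
```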
Collapse
Affiliation(s)
- Kamyab Keshtkar
- University of Tehran School of Electrical and Computer Engineering, Tehran, Iran
| | - Ali Reza Safarpour
- Gastroenterohepatology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Ramin Heshmat
- Chronic Diseases Research Center, Endocrinology and Metabolism Population Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran
| | - Rasoul Sotoudehmanesh
- Department of Gastroenterology, Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
| | - Abbas Keshtkar
- Department of Health Sciences Education Development, Tehran University of Medical Sciences School of Public Health, Tehran, Iran
| |
Collapse
|
18
|
Shakir T, Kader R, Bhan C, Chand M. AI in colonoscopy - detection and characterisation of malignant polyps. ARTIFICIAL INTELLIGENCE SURGERY 2023:186-94. [DOI: 10.20517/ais.2023.17] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2024]
Abstract
The medical technological revolution has transformed the way we deliver care. Adjuncts such as artificial intelligence and machine learning have underpinned this. The applications to the field of endoscopy are numerous. Malignant polyps represent a significant diagnostic dilemma as they lie in an area in which mischaracterisation may mean the difference between an endoscopic procedure and a formal bowel resection. This has implications for patients’ oncological outcomes, morbidity and mortality, especially if post-procedure histopathology upstages disease. We have made significant strides with the applications of artificial intelligence to colonoscopic detection. Deep learning algorithms can be trained on video and image databases. These have been applied to traditional, human-derived classification methods, such as Paris or Kudo, with up to 93% accuracy. Furthermore, multimodal characterisation systems have been developed, which also factor in patient demographics and colonic location to provide an estimation of invasion and endoscopic resectability with over 90% accuracy. Although the technology is still evolving, and the lack of high-quality randomised controlled trials limits clinical usability, there is an exciting horizon upon us for artificial intelligence-augmented endoscopy.
Collapse
|
19
|
Kim J, Kim H, Yoon YS, Kim CW, Hong SM, Kim S, Choi D, Chun J, Hong SW, Hwang SW, Park SH, Yang DH, Ye BD, Byeon JS, Yang SK, Kim SY, Myung SJ. Investigation of artificial intelligence integrated fluorescence endoscopy image analysis with indocyanine green for interpretation of precancerous lesions in colon cancer. PLoS One 2023; 18:e0286189. [PMID: 37228164 DOI: 10.1371/journal.pone.0286189] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 05/11/2023] [Indexed: 05/27/2023] Open
Abstract
Indocyanine green (ICG) has been used in clinical practice for more than 40 years, and its safety and preferential accumulation in tumors have been reported for various tumor types, including colon cancer. However, reports on clinical assessments of ICG-based molecular endoscopy imaging for precancerous lesions are scarce. We determined the visualization ability of ICG fluorescence endoscopy in colitis-associated colon cancer using 30 lesions from an azoxymethane/dextran sulfate sodium (AOM/DSS) mouse model and tissue samples from 16 colon cancer patients. With a total of 60 images (optical, fluorescence) obtained during endoscopic observation of mouse colon cancer, we used a deep learning network to predict four classes (Normal, Dysplasia, Adenoma, and Carcinoma) of colorectal cancer development. ICG could detect 100% of carcinomas, 90% of adenomas, and 57% of dysplasias, with little background signal at 30 min after injection via real-time fluorescence endoscopy. Correlation analysis with immunohistochemistry revealed a positive correlation of ICG with inducible nitric oxide synthase (iNOS; r > 0.5). Increased expression of iNOS resulted in increased levels of cellular nitric oxide in cancer cells compared to that in normal cells, which was related to the inhibition of drug efflux via ABCB1 transporter down-regulation, resulting in delayed retention of intracellular ICG. With artificial intelligence training, the accuracy of image classification into the four classes was assessed using fluorescence, optical, and combined fluorescence/optical image datasets. Fluorescence images achieved the highest accuracy (AUC of 0.8125), compared with optical and fluorescence/optical images (AUC of 0.75 and 0.6667, respectively). These findings highlight the clinical feasibility of ICG as a detector of precancerous lesions in real-time fluorescence endoscopy with artificial intelligence training and suggest that the mechanism of ICG retention in cancer cells is related to intracellular nitric oxide concentration.
Collapse
Affiliation(s)
- Jinhyeon Kim
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Hajung Kim
- Convergence Medicine Research Center, Asan Medical Center, Seoul, Republic of Korea
| | - Yong Sik Yoon
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Chan Wook Kim
- Department of Colon and Rectal Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Seung-Mo Hong
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sungjee Kim
- Department of Chemistry and School of Interdisciplinary Bioscience and Bioengineering, Pohang University of Science & Technology, Pohang, Gyeongbuk, Republic of Korea
| | - Doowon Choi
- School of Interdisciplinary Bioscience and Bioengineering, Pohang University of Science & Technology, Pohang, Gyeongbuk, Republic of Korea
| | - Jihyun Chun
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Seung Wook Hong
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sung Wook Hwang
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sang Hyoung Park
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Dong-Hoon Yang
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Byong Duk Ye
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jeong-Sik Byeon
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Suk-Kyun Yang
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sun Young Kim
- Asan Institute for Life Sciences, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Seung-Jae Myung
- Digestive Diseases Research Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Edis Biotech, Songpa-gu, Seoul, Republic of Korea
| |
Collapse
|
20
|
Sharma A, Kumar R, Yadav G, Garg P. Artificial intelligence in intestinal polyp and colorectal cancer prediction. Cancer Lett 2023; 565:216238. [PMID: 37211068 DOI: 10.1016/j.canlet.2023.216238] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 05/17/2023] [Accepted: 05/17/2023] [Indexed: 05/23/2023]
Abstract
Artificial intelligence (AI) algorithms and their application to disease detection and decision support for healthcare professions have evolved greatly over the past decade. AI has been widely applied and explored in gastroenterology for endoscopic analysis to diagnose intestinal cancers, premalignant polyps, gastrointestinal inflammatory lesions, and bleeding. Patients' responses to treatments and prognoses have both been predicted using AI by combining multiple algorithms. In this review, we explore the recent applications of AI algorithms in the identification and characterization of intestinal polyps and in colorectal cancer prediction. AI-based prediction models have the potential to help medical practitioners diagnose, establish prognoses, and draw accurate conclusions for the treatment of patients. With the understanding that rigorous validation of AI approaches through randomized controlled studies is needed before health authorities endorse widespread clinical use, the article also discusses the limitations and challenges associated with deploying AI systems to diagnose intestinal malignancies and premalignant lesions.
Collapse
Affiliation(s)
- Anju Sharma
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
| | - Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India; Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
| | - Garima Yadav
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India
| | - Prabha Garg
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India.
| |
Collapse
|
21
|
Krenzer A, Heil S, Fitting D, Matti S, Zoller WG, Hann A, Puppe F. Automated classification of polyps using deep learning architectures and few-shot learning. BMC Med Imaging 2023; 23:59. [PMID: 37081495 PMCID: PMC10120204 DOI: 10.1186/s12880-023-01007-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 03/24/2023] [Indexed: 04/22/2023] Open
Abstract
BACKGROUND Colorectal cancer is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems. After the classification, further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems to assist gastroenterologists in classifying polyps based on the NICE and Paris classifications. METHODS We build two classification systems. One classifies polyps based on their shape (Paris). The other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp on the image; second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of NICE-annotated images in our database. RESULTS For the Paris classification, we achieve an accuracy of 89.35 %, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13 % and thereby demonstrate the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations. CONCLUSION Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
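The few-shot component described above, an embedding space in which polyps are classified from only a handful of labelled examples, can be illustrated with a minimal nearest-prototype sketch. This is not the authors' implementation: the 128-dimensional embeddings are random placeholders standing in for the output of a pretrained backbone, and the class names are hypothetical.

```python
# Minimal nearest-prototype classifier over an embedding space (illustrative sketch).
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def class_prototypes(support: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Average the few labelled support embeddings of each class into one prototype."""
    return {label: l2_normalize(embs.mean(axis=0)) for label, embs in support.items()}

def classify(query: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Assign the query embedding to the class whose prototype is most similar (cosine)."""
    q = l2_normalize(query)
    return max(prototypes, key=lambda label: float(q @ prototypes[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support = {f"NICE-{i}": rng.normal(size=(5, 128)) for i in (1, 2, 3)}  # 5 shots per class
    protos = class_prototypes(support)
    print(classify(rng.normal(size=128), protos))
```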
Collapse
Affiliation(s)
- Adrian Krenzer
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany.
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany.
| | - Stefan Heil
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
| | - Daniel Fitting
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
| | - Safa Matti
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
| | - Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174, Stuttgart, Germany
| | - Alexander Hann
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
| | - Frank Puppe
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
| |
Collapse
|
22
|
Shen MH, Huang CC, Chen YT, Tsai YJ, Liou FM, Chang SC, Phan NN. Deep Learning Empowers Endoscopic Detection and Polyps Classification: A Multiple-Hospital Study. Diagnostics (Basel) 2023; 13:diagnostics13081473. [PMID: 37189575 DOI: 10.3390/diagnostics13081473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 04/03/2023] [Accepted: 04/12/2023] [Indexed: 05/17/2023] Open
Abstract
The present study aimed to develop an AI-based system for the detection and classification of polyps using colonoscopy images. Approximately 256,220 colonoscopy images from 5000 colorectal cancer patients were collected and processed. We used a CNN model for polyp detection and the EfficientNet-b0 model for polyp classification. Data were partitioned into training, validation and testing sets at a 70%, 15% and 15% ratio, respectively. After the model was trained, validated and tested, we conducted a further external validation to evaluate its performance rigorously, using both prospective (n = 150) and retrospective (n = 385) data collection from 3 hospitals. The deep learning model performance on the testing set reached a state-of-the-art sensitivity and specificity of 0.9709 (95% CI: 0.9646-0.9757) and 0.9701 (95% CI: 0.9663-0.9749), respectively, for polyp detection. The polyp classification model attained an AUC of 0.9989 (95% CI: 0.9954-1.00). External validation across the 3 hospitals achieved a lesion-based sensitivity of 0.9516 (95% CI: 0.9295-0.9670) and a frame-based specificity of 0.9720 (95% CI: 0.9713-0.9726) for polyp detection, and the model achieved an AUC of 0.9521 (95% CI: 0.9308-0.9734) for polyp classification. The high-performance, deep-learning-based system could be used in clinical practice to facilitate rapid, efficient and reliable decisions by physicians and endoscopists.
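As a rough sketch of the classification arm described above, and not the study's actual pipeline, the snippet below builds a 70/15/15 split and swaps the classifier head of a torchvision EfficientNet-B0; the folder path and class layout are hypothetical, and torchvision ≥ 0.13 is assumed for the weights argument.

```python
# Sketch: 70/15/15 split plus an EfficientNet-B0 with a new classification head.
import torch.nn as nn
from torch.utils.data import random_split
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: one sub-directory per polyp class.
full_ds = datasets.ImageFolder("colonoscopy_frames/", transform=tfm)
n = len(full_ds)
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_ds, val_ds, test_ds = random_split(full_ds, [n_train, n_val, n - n_train - n_val])

# Pretrained backbone; replace the final linear layer with one sized to our classes.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, len(full_ds.classes))
```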
Collapse
Affiliation(s)
- Ming-Hung Shen
- Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City 24205, Taiwan
| | - Chi-Cheng Huang
- Department of Surgery, Taipei Veterans General Hospital, Taipei City 11217, Taiwan
- Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei City 10663, Taiwan
| | - Yu-Tsung Chen
- Department of Internal Medicine, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
| | - Yi-Jian Tsai
- Division of Colorectal Surgery, Department of Surgery, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
- Graduate Institute of Biomedical Electronics and Bioinformatics, Department of Electrical Engineering, National Taiwan University, Taipei City 10663, Taiwan
| | | | - Shih-Chang Chang
- Division of Colorectal Surgery, Department of Surgery, Cathay General Hospital, Taipei City 106443, Taiwan
| | - Nam Nhut Phan
- Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei City 10055, Taiwan
| |
Collapse
|
23
|
Gimeno-García AZ, Hernández-Pérez A, Nicolás-Pérez D, Hernández-Guerra M. Artificial Intelligence Applied to Colonoscopy: Is It Time to Take a Step Forward? Cancers (Basel) 2023; 15:cancers15082193. [PMID: 37190122 DOI: 10.3390/cancers15082193] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 04/04/2023] [Accepted: 04/05/2023] [Indexed: 05/17/2023] Open
Abstract
Growing evidence indicates that artificial intelligence (AI) applied to medicine is here to stay. In gastroenterology, AI computer vision applications have been stated as a research priority. The two main AI system categories are computer-aided polyp detection (CADe) and computer-assisted diagnosis (CADx). However, other fields of expansion are those related to colonoscopy quality, such as methods to objectively assess colon cleansing during the colonoscopy, as well as devices to automatically predict and improve bowel cleansing before the examination, predict deep submucosal invasion, obtain a reliable measurement of colorectal polyps and accurately locate colorectal lesions in the colon. Although growing evidence indicates that AI systems could improve some of these quality metrics, there are concerns regarding cost-effectiveness, and large and multicentric randomized studies with strong outcomes, such as post-colonoscopy colorectal cancer incidence and mortality, are lacking. The integration of all these tasks into one quality-improvement device could facilitate the incorporation of AI systems in clinical practice. In this manuscript, the current status of the role of AI in colonoscopy is reviewed, as well as its current applications, drawbacks and areas for improvement.
Collapse
Affiliation(s)
- Antonio Z Gimeno-García
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
| | - Anjara Hernández-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
| | - David Nicolás-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
| | - Manuel Hernández-Guerra
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
| |
Collapse
|
24
|
Mazumdar S, Sinha S, Jha S, Jagtap B. Computer-aided automated diminutive colonic polyp detection in colonoscopy by using deep machine learning system; first indigenous algorithm developed in India. Indian J Gastroenterol 2023; 42:226-232. [PMID: 37145230 DOI: 10.1007/s12664-022-01331-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 12/18/2022] [Indexed: 05/06/2023]
Abstract
BACKGROUND Colonic polyps can be detected and resected during a colonoscopy before cancer development. However, about one-quarter of polyps may be missed due to their small size, location or human error. An artificial intelligence (AI) system can improve polyp detection and reduce colorectal cancer incidence. We are developing an indigenous AI system to detect diminutive polyps in real-life scenarios that is compatible with any high-definition colonoscopy system and endoscopic video-capture software. METHODS We trained a masked region-based convolutional neural network model to detect and localize colonic polyps. Three independent datasets of colonoscopy videos comprising 1,039 image frames were used and divided into a training dataset of 688 frames and a testing dataset of 351 frames. Of the 1,039 image frames, 231 were from real-life colonoscopy videos from our centre. The rest were from publicly available image frames already modified to be directly usable for developing the AI system. The image frames of the testing dataset were also augmented by rotating and zooming the images to replicate the real-life distortions seen during colonoscopy. The AI system was trained to localize the polyp by creating a 'bounding box'. It was then applied to the testing dataset to test its accuracy in detecting polyps automatically. RESULTS The AI system achieved a mean average precision (equivalent to specificity) of 88.63% for automatic polyp detection. All polyps in the testing dataset were identified by the AI, i.e., there were no false-negative results (sensitivity of 100%). The mean polyp size in the study was 5 (± 4) mm. The mean processing time per image frame was 96.4 minutes. CONCLUSIONS This AI system, when applied to real-life colonoscopy images with wide variations in bowel preparation and small polyp size, can detect colonic polyps with a high degree of accuracy.
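The 'masked region-based convolutional neural network' described in the Methods corresponds to the Mask R-CNN family. The sketch below follows the standard torchvision fine-tuning recipe for a two-class (background vs. polyp) detector; it is illustrative rather than the authors' code, and torchvision ≥ 0.13 is assumed for the weights argument.

```python
# Sketch: adapting a pretrained Mask R-CNN to detect and localize a single polyp class.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_polyp_detector(num_classes: int = 2):  # background + polyp
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head so it predicts bounding boxes for the polyp class only.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    # Replace the mask head to match the new number of classes.
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model
```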
Collapse
Affiliation(s)
- Srijan Mazumdar
- Indian Institute of Liver and Digestive Sciences, Sitala (East), Jagadishpur, Sonarpur, 24 Parganas (South), Kolkata, 700 150, India.
| | - Saugata Sinha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
| | - Saurabh Jha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
| | - Balaji Jagtap
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
| |
Collapse
|
25
|
Dhaliwal J, Walsh CM. Artificial Intelligence in Pediatric Endoscopy: Current Status and Future Applications. Gastrointest Endosc Clin N Am 2023; 33:291-308. [PMID: 36948747 DOI: 10.1016/j.giec.2022.12.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2023]
Abstract
The application of artificial intelligence (AI) has great promise for improving pediatric endoscopy. The majority of preclinical studies have been undertaken in adults, with the greatest progress being made in the context of colorectal cancer screening and surveillance. This development has only been possible with advances in deep learning, like the convolutional neural network model, which has enabled real-time detection of pathology. Comparatively, the majority of deep learning systems developed in inflammatory bowel disease have focused on predicting disease severity and were developed using still images rather than videos. The application of AI to pediatric endoscopy is in its infancy, thus providing an opportunity to develop clinically meaningful and fair systems that do not perpetuate societal biases. In this review, we provide an overview of AI, summarize the advances of AI in endoscopy, and describe its potential application to pediatric endoscopic practice and education.
Collapse
Affiliation(s)
- Jasbir Dhaliwal
- Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children's Hospital Medical Center, University of Cincinnati, OH, USA.
| | - Catharine M Walsh
- Division of Gastroenterology, Hepatology, and Nutrition, and the SickKids Research and Learning Institutes, The Hospital for Sick Children, Toronto, ON, Canada; Department of Paediatrics and The Wilson Centre, University of Toronto, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
26
|
Gan P, Li P, Xia H, Zhou X, Tang X. The application of artificial intelligence in improving colonoscopic adenoma detection rate: Where are we and where are we going. GASTROENTEROLOGIA Y HEPATOLOGIA 2023; 46:203-213. [PMID: 35489584 DOI: 10.1016/j.gastrohep.2022.03.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 03/08/2022] [Accepted: 03/18/2022] [Indexed: 02/08/2023]
Abstract
Colorectal cancer (CRC) is one of the most common malignant tumors in the world. Colonoscopy is the crucial examination technique in CRC screening programs for the early detection of precursor lesions and the treatment of early colorectal cancer, which can significantly reduce the morbidity and mortality of CRC. However, pooled polyp miss rates during colonoscopic examination are as high as 22%. Artificial intelligence (AI) provides a promising way to improve the colonoscopic adenoma detection rate (ADR). It may assist endoscopists in avoiding missed polyps and offer an accurate optical diagnosis of suspected lesions. Herein, we describe some of the milestone studies on the use of AI in colonoscopy and future directions for applying AI to improve the colonoscopic ADR.
Collapse
Affiliation(s)
- Peiling Gan
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Peiling Li
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Huifang Xia
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Xian Zhou
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Xiaowei Tang
- Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China; Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, China.
| |
Collapse
|
27
|
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158 DOI: 10.1038/s41575-022-00701-y] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/03/2022] [Indexed: 11/10/2022]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Collapse
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
| |
Collapse
|
28
|
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. BIOMEDICAL OPTICS EXPRESS 2023; 14:593-607. [PMID: 36874484 PMCID: PMC9979670 DOI: 10.1364/boe.473446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/25/2022] [Accepted: 12/06/2022] [Indexed: 06/18/2023]
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
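As a simple illustration of why temporal context stabilises per-frame predictions, and not the paper's actual method, the sketch below smooths a sequence of frame-level adenoma probabilities with a sliding window before making a per-polyp call; the window length and decision threshold are arbitrary.

```python
# Sketch: temporal smoothing of noisy per-frame adenoma probabilities.
import numpy as np

def temporal_smooth(frame_probs: np.ndarray, window: int = 15) -> np.ndarray:
    """Moving average over the per-frame adenoma probabilities of one polyp clip."""
    kernel = np.ones(window) / window
    return np.convolve(frame_probs, kernel, mode="same")

def clip_prediction(frame_probs: np.ndarray, threshold: float = 0.5) -> str:
    """Aggregate the smoothed probabilities into a single per-polyp diagnosis."""
    return "adenoma" if temporal_smooth(frame_probs).mean() >= threshold else "non-adenoma"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.clip(0.6 + 0.3 * rng.standard_normal(120), 0, 1)  # 120 simulated frames
    print(clip_prediction(noisy))
```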
Collapse
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Odin Vision, London W1W 7TY, UK
| | | | - Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | | | | | - Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | - Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | | | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| |
Collapse
|
29
|
A Deep-Learning Approach for Identifying and Classifying Digestive Diseases. Symmetry (Basel) 2023. [DOI: 10.3390/sym15020379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Digestive diseases are illnesses that affect the digestive tract, often known as the gastrointestinal (GI) tract or the gastrointestinal system. The stomach, large and small intestines, liver, pancreas and gallbladder are all components of the digestive tract. Conditions range from moderate to serious. Heartburn, cancer, irritable bowel syndrome (IBS) and lactose intolerance are only a few of the frequent issues. The digestive system may be treated with many different surgical techniques; laparoscopy, open surgery and endoscopy are a few examples. This paper proposes transfer-learning approaches with different pre-trained models to identify and classify digestive diseases. The proposed systems showed an increase in metrics such as accuracy, precision and recall when compared with other state-of-the-art methods, and EfficientNetB0 achieved the best performance, with 98.01% accuracy, 98% precision and 98% recall.
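The accuracy, precision and recall figures quoted above are standard multi-class metrics; the small, generic example below shows how they are typically computed with scikit-learn, using invented labels rather than the paper's data.

```python
# Sketch: accuracy, macro-averaged precision and recall for a multi-class classifier.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = ["polyp", "ulcer", "normal", "polyp", "normal", "ulcer"]   # placeholder ground truth
y_pred = ["polyp", "ulcer", "normal", "normal", "normal", "ulcer"]  # placeholder predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
```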
Collapse
|
30
|
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. [PMID: 36761957 PMCID: PMC9905815 DOI: 10.3389/fonc.2023.1065402] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Artificial Intelligence (AI) is a branch of computer science that utilizes optimization, probabilistic and statistical approaches to analyze and make predictions based on vast amounts of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, the applications of AI to diagnose, prognosticate, and predict response to therapy in CRC are gaining traction and proving to be promising. There have also been several advancements in AI technologies to help predict metastases in CRC and in Computer-Aided Detection (CAD) systems to reduce miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Collapse
Affiliation(s)
- Arian Mansur
- Harvard Medical School, Boston, MA, United States
| | | | - Tarig Elhakim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| |
Collapse
|
31
|
A Novel Computer-Aided Detection/Diagnosis System for Detection and Classification of Polyps in Colonoscopy. Diagnostics (Basel) 2023; 13:diagnostics13020170. [PMID: 36672980 PMCID: PMC9857872 DOI: 10.3390/diagnostics13020170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 12/16/2022] [Accepted: 12/20/2022] [Indexed: 01/05/2023] Open
Abstract
Using a deep learning algorithm in the development of a computer-aided system for colon polyp detection is effective in reducing the miss rate. This study aimed to develop a system for colon polyp detection and classification. We used a data augmentation technique and a conditional GAN to generate polyp images for YOLO training to improve polyp detection ability. After testing the model five times, the model with 300 GAN-generated images (GAN 300) achieved the highest average precision (AP) of 54.60% for SSA and 75.41% for TA. These results were better than those of the data augmentation method, which showed an AP of 53.56% for SSA and 72.55% for TA. The AP, mAP, and IoU of the GAN 300 model for HP were 80.97%, 70.07%, and 57.24%, an increase over the data augmentation technique, which yielded 76.98%, 67.70%, and 55.26%, respectively. We also used Gaussian blurring to simulate the blurred images encountered during colonoscopy and then applied DeblurGAN-v2 to deblur the images. Further, we trained the dataset using YOLO to classify polyps. After using DeblurGAN-v2, the mAP increased from 25.64% to 30.74%. This method effectively improved the accuracy of polyp detection and classification.
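One step described above, simulating the blur seen during colonoscopy by Gaussian-blurring clean frames so that a deblurring network has paired training data, can be sketched as follows; the kernel size and sigma are arbitrary choices, and the DeblurGAN-v2 stage itself is not shown.

```python
# Sketch: creating (sharp, blurred) frame pairs for training a deblurring model.
import cv2

def make_blurred_pair(path: str, kernel: int = 9, sigma: float = 3.0):
    """Return the original frame and a Gaussian-blurred copy simulating colonoscopy blur."""
    sharp = cv2.imread(path)                               # BGR image; None if the path is invalid
    blurred = cv2.GaussianBlur(sharp, (kernel, kernel), sigma)
    return sharp, blurred
```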
Collapse
|
32
|
Artificial intelligence-assisted optical diagnosis for the resect-and-discard strategy in clinical practice: the Artificial intelligence BLI Characterization (ABC) study. Endoscopy 2023; 55:14-22. [PMID: 35562098 DOI: 10.1055/a-1852-0330] [Citation(s) in RCA: 53] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
BACKGROUND Optical diagnosis of colonic polyps is poorly reproducible outside of high volume referral centers. The present study aimed to assess whether real-time artificial intelligence (AI)-assisted optical diagnosis is accurate enough to implement the leave-in-situ strategy for diminutive (≤ 5 mm) rectosigmoid polyps (DRSPs). METHODS Consecutive colonoscopy outpatients with ≥ 1 DRSP were included. DRSPs were categorized as adenomas or nonadenomas by the endoscopists, who had differing expertise in optical diagnosis, with the assistance of a real-time AI system (CAD-EYE). The primary end point was ≥ 90 % negative predictive value (NPV) for adenomatous histology in high confidence AI-assisted optical diagnosis of DRSPs (Preservation and Incorporation of Valuable endoscopic Innovations [PIVI-1] threshold), with histopathology as the reference standard. The agreement between optical- and histology-based post-polypectomy surveillance intervals (≥ 90 %; PIVI-2 threshold) was also calculated according to European Society of Gastrointestinal Endoscopy (ESGE) and United States Multi-Society Task Force (USMSTF) guidelines. RESULTS Overall 596 DRSPs were retrieved for histology in 389 patients; an AI-assisted high confidence optical diagnosis was made in 92.3 %. The NPV of AI-assisted optical diagnosis for DRSPs (PIVI-1) was 91.0 % (95 %CI 87.1 %-93.9 %). The PIVI-2 threshold was met with 97.4 % (95 %CI 95.7 %-98.9 %) and 92.6 % (95 %CI 90.0 %-95.2 %) of patients according to ESGE and USMSTF, respectively. AI-assisted optical diagnosis accuracy was significantly lower for nonexperts (82.3 %, 95 %CI 76.4 %-87.3 %) than for experts (91.9 %, 95 %CI 88.5 %-94.5 %); however, nonexperts quickly approached the performance levels of experts over time. CONCLUSION AI-assisted optical diagnosis matches the required PIVI thresholds. This does not however offset the need for endoscopists' high level confidence and expertise. The AI system seems to be useful, especially for nonexperts.
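The PIVI-1 end point above reduces to a negative predictive value computed over diminutive rectosigmoid polyps given a high-confidence non-adenoma optical call. The toy calculation below uses invented counts chosen only to reproduce a 91% NPV and is purely illustrative.

```python
# Sketch: NPV of a high-confidence "non-adenoma" optical diagnosis, with histology as reference.
def npv(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN) over polyps optically called non-adenomatous."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: 182 polyps called non-adenoma, 18 of which were adenomas on histology.
print(f"NPV = {npv(182, 18):.1%}")   # 91.0%; the PIVI-1 threshold requires >= 90%
```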
Collapse
|
33
|
Young EJ, Rajandran A, Philpott HL, Sathananthan D, Hoile SF, Singh R. Mucosal imaging in colon polyps: New advances and what the future may hold. World J Gastroenterol 2022; 28:6632-6661. [PMID: 36620337 PMCID: PMC9813932 DOI: 10.3748/wjg.v28.i47.6632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2022] [Revised: 10/23/2022] [Accepted: 11/23/2022] [Indexed: 12/19/2022] Open
Abstract
An expanding range of advanced mucosal imaging technologies has been developed with the goal of improving the detection and characterisation of lesions in the gastrointestinal tract. Many technologies have targeted colorectal neoplasia, given the potential for intervention prior to the development of invasive cancer in the setting of widespread surveillance programs. Improvement in adenoma detection reduces miss rates and prevents interval cancer development. Advanced imaging technologies aim to enhance detection without significantly increasing procedural time. Accurate polyp characterisation guides resection techniques for larger polyps, as well as providing the platform for the “resect and discard” and “do not resect” strategies for small and diminutive polyps. This review aims to collate and summarise the evidence regarding these technologies to guide colonoscopic practice for both interventional and non-interventional endoscopists.
Collapse
Affiliation(s)
- Edward John Young
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
| | - Arvinf Rajandran
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
| | - Hamish Lachlan Philpott
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
| | - Dharshan Sathananthan
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
| | - Sophie Fenella Hoile
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
| | - Rajvinder Singh
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
| |
Collapse
|
34
|
Ali S. Where do we stand in AI for endoscopic image analysis? Deciphering gaps and future directions. NPJ Digit Med 2022; 5:184. [PMID: 36539473 PMCID: PMC9767933 DOI: 10.1038/s41746-022-00733-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
Recent developments in deep learning have enabled data-driven algorithms that can reach human-level performance and beyond. The development and deployment of medical image analysis methods face several challenges, including data heterogeneity due to population diversity and different device manufacturers. In addition, more input from experts is required for a reliable method development process. While the exponential growth in clinical imaging data has enabled deep learning to flourish, data heterogeneity, multi-modality, and rare or inconspicuous disease cases still need to be addressed. Because endoscopy is highly operator-dependent, with grim clinical outcomes in some disease cases, reliable and accurate automated system guidance can improve patient care. Most existing methods need to be more generalisable to unseen target data, patient population variability, and variable disease appearances. The paper reviews recent works on endoscopic image analysis with artificial intelligence (AI) and emphasises the current unmet needs in this field. Finally, it outlines future directions for clinically relevant, complex AI solutions to improve patient outcomes.
Collapse
Affiliation(s)
- Sharib Ali
- School of Computing, University of Leeds, LS2 9JT, Leeds, UK.
| |
Collapse
|
35
|
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. SENSORS (BASEL, SWITZERLAND) 2022; 22:9250. [PMID: 36501951 PMCID: PMC9739266 DOI: 10.3390/s22239250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/24/2022] [Indexed: 06/17/2023]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various tests for screening that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades as well as the available imaging modalities (particularly, histopathology images used for analysis) in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer that lead to early treatment of the disease and produce a lower mortality rate compared with the rate produced after symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests to make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Collapse
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
| | - Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
| | - Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| |
Collapse
|
36
|
Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592 PMCID: PMC9672666 DOI: 10.3389/fmed.2022.1018937] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/03/2022] [Indexed: 11/06/2022] Open
Abstract
Background Artificial Intelligence (AI) holds considerable promise for diagnostics in the field of gastroenterology. This systematic review and meta-analysis aims to assess the diagnostic accuracy of AI models compared with the gold standard of experts and histopathology for the diagnosis of various gastrointestinal (GI) luminal pathologies, including polyps, neoplasms, and inflammatory bowel disease. Methods We searched PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and hierarchical summary receiver operating characteristic curve (HSROC) analysis. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted based on the type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0–94.1) and the specificity was 91.7% (95% CI: 87.4–94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) were at low risk regarding the reference standard and 67% (n = 49) were at low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies. There is a need for large, multi-center trials in both high-income countries and low- and middle-income countries to assess the performance of these AI models in real clinical settings and their impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
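The extraction step described in the Methods, building a 2x2 contingency table per study and deriving sensitivity and specificity from it before pooling, can be sketched as follows; the counts are invented and the HSROC pooling itself is not shown.

```python
# Sketch: per-study 2x2 contingency table and the derived accuracy measures.
from dataclasses import dataclass

@dataclass
class ContingencyTable:
    tp: int  # AI positive, reference standard positive
    fp: int  # AI positive, reference standard negative
    fn: int  # AI negative, reference standard positive
    tn: int  # AI negative, reference standard negative

    @property
    def sensitivity(self) -> float:
        return self.tp / (self.tp + self.fn)

    @property
    def specificity(self) -> float:
        return self.tn / (self.tn + self.fp)

study = ContingencyTable(tp=92, fp=8, fn=8, tn=92)   # invented counts for one study
print(study.sensitivity, study.specificity)
```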
Collapse
Affiliation(s)
- Om Parkash
- Department of Medicine, Aga Khan University, Karachi, Pakistan
| | | | - Uswa Jiwani
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
| | - Fahad Rind
- Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
| | - Zahra Ali Padhani
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
| | - Arjumand Rizvi
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
| | - Zahra Hoodbhoy
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
| | - Jai K. Das
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- *Correspondence: Jai K. Das,
| |
Collapse
|
37
|
Chen HY, Ge P, Liu JY, Qu JL, Bao F, Xu CM, Chen HL, Shang D, Zhang GX. Artificial intelligence: Emerging player in the diagnosis and treatment of digestive disease. World J Gastroenterol 2022; 28:2152-2162. [PMID: 35721881 PMCID: PMC9157617 DOI: 10.3748/wjg.v28.i20.2152] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 11/24/2021] [Accepted: 04/24/2022] [Indexed: 02/06/2023] Open
Abstract
Given the breakthroughs in key technologies such as image recognition, deep learning and neural networks, artificial intelligence (AI) continues to develop rapidly, leading to closer and deeper integration with an increasingly data-, knowledge- and brain-labor-intensive medical industry. As society continues to advance and individuals become more aware of their health needs, the problems associated with the aging of the population are receiving increasing attention, and there is an urgent demand for improving medical technology, prolonging human life and enhancing health. Digestive system diseases are the most common clinical diseases and are characterized by complex clinical manifestations and a general lack of obvious symptoms in the early stage. Such diseases are very difficult to diagnose and treat. In recent years, the incidence of diseases of the digestive system has increased. As AI applications in the field of health care continue to be developed, AI has begun playing an important role in the diagnosis and treatment of diseases of the digestive system. In this paper, the application of AI in assisted diagnosis and the application and prospects of AI in malignant and benign digestive system diseases are reviewed.
Collapse
Affiliation(s)
- Hai-Yang Chen
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
| | - Peng Ge
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
| | - Jia-Yue Liu
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
| | - Jia-Lin Qu
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Institute (College) of Integrative Medicine, Dalian Medical University, Dalian 116044, Liaoning Province, China
| | - Fang Bao
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
| | - Cai-Ming Xu
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Institute (College) of Integrative Medicine, Dalian Medical University, Dalian 116044, Liaoning Province, China
| | - Hai-Long Chen
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Institute (College) of Integrative Medicine, Dalian Medical University, Dalian 116044, Liaoning Province, China
| | - Dong Shang
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Institute (College) of Integrative Medicine, Dalian Medical University, Dalian 116044, Liaoning Province, China
| | - Gui-Xin Zhang
- Laboratory of Integrative Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Department of General Surgery, Pancreatic-Biliary Center, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, Liaoning Province, China
- Institute (College) of Integrative Medicine, Dalian Medical University, Dalian 116044, Liaoning Province, China
| |
Collapse
|
38
|
Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. SENSORS (BASEL, SWITZERLAND) 2022; 22:4079. [PMID: 35684696 PMCID: PMC9185306 DOI: 10.3390/s22114079] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/21/2022] [Accepted: 05/24/2022] [Indexed: 05/27/2023]
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders. Lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance for patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. A single video examination generates around 5000 frames, and reviewing every frame is extensive and time-consuming. Artificial intelligence techniques address these challenges by assisting physicians in making accurate diagnostic decisions. In this study, multiple methodologies were developed and organized into four proposed systems, each comprising more than one diagnostic method. The first proposed system uses artificial neural network (ANN) and feed-forward neural network (FFNN) classifiers on hybrid features extracted by three algorithms: local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses pre-trained CNN models (GoogLeNet and AlexNet) to extract deep feature maps and classify them with high accuracy. The third proposed system is a hybrid of two blocks: CNN models (GoogLeNet and AlexNet) to extract feature maps, followed by a support vector machine (SVM) to classify them. The fourth proposed system uses ANN and FFNN classifiers on hybrid features that combine CNN features (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH descriptors. All the proposed systems achieved promising results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
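As a purely illustrative reading of the hybrid-feature idea in the first and fourth systems, the sketch below extracts LBP, GLCM, and colour-histogram descriptors and feeds the concatenated vector to a feed-forward classifier. This is not the authors' code: a plain RGB histogram stands in for the fuzzy colour histogram, and all parameter choices (radii, bin counts, layer sizes) are assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of handcrafted hybrid features
# (LBP + GLCM statistics + a colour histogram) classified with a feed-forward network.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops  # scikit-image >= 0.19 spelling
from sklearn.neural_network import MLPClassifier

def hybrid_features(rgb_image):
    """rgb_image: uint8 RGB endoscopic frame (assumed 0-255 range)."""
    gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
    # LBP histogram (uniform patterns, radius 1, 8 neighbours -> values 0..9)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # GLCM statistics at one distance/angle
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]
    # Plain per-channel colour histogram as a simplified stand-in for the fuzzy colour histogram
    color_hist = np.concatenate(
        [np.histogram(rgb_image[..., c], bins=8, range=(0, 255), density=True)[0] for c in range(3)]
    )
    return np.concatenate([lbp_hist, glcm_stats, color_hist])

# Hypothetical usage with images X and labels y (placeholders, not provided here):
# ffnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
# ffnn.fit(np.stack([hybrid_features(img) for img in X]), y)
```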
Collapse
Affiliation(s)
- Suliman Mohamed Fati
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia;
| | - Ebrahim Mohammed Senan
- Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India;
| | - Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia;
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
| |
Collapse
|
39
|
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03689-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Colorectal cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists resect lesions promptly without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification. However, existing methods do not provide the hyperparameter settings essential for model optimisation. Furthermore, unlike the hyperplastic and adenomatous polyp types, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is challenging due to the similarities in their patterns; therefore, the strengths of individual weak learners are combined, with optimised hyperparameters, into a weighted ensemble model for accurate classification. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing convolutional neural networks with transfer learning and selected only the best-performing architectures for the ensemble model. The performance evaluation of the proposed method on the UCI and PICCOLO datasets in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen's Kappa coefficient (0.94, 0.62) shows its superiority over existing models. Experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall using SVM and 85.9% accuracy with 87.6% recall using other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks, combined with data augmentation, significantly boosts the performance of deep learning-based CAD.
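To make the weighted-ensemble idea concrete, the sketch below fuses per-model class probabilities with fixed weights and evaluates the result with accuracy and Cohen's kappa. It is only a minimal illustration under assumed weights and dummy predictions, not the study's optimisation procedure.

```python
# Minimal sketch (assumed weights, dummy data) of weighted soft voting over CNN outputs,
# evaluated with accuracy and Cohen's kappa as in the abstract.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def weighted_ensemble(prob_list, weights):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per base CNN."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(prob_list)                    # (n_models, n_samples, n_classes)
    fused = np.tensordot(weights, stacked, axes=1)   # weighted average of class probabilities
    return fused.argmax(axis=1)

# Hypothetical predictions from three base learners (illustrative values only)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)
probs = [rng.dirichlet(np.ones(3), size=200) for _ in range(3)]
y_pred = weighted_ensemble(probs, weights=[0.5, 0.3, 0.2])
print(accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred))
```

In practice the weights themselves would be tuned on validation data rather than fixed by hand as above.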
Collapse
|
40
|
Chen S, Urban G, Baldi P. Weakly Supervised Polyp Segmentation in Colonoscopy Images Using Deep Neural Networks. J Imaging 2022; 8:jimaging8050121. [PMID: 35621885 PMCID: PMC9144698 DOI: 10.3390/jimaging8050121] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/15/2022] [Accepted: 04/19/2022] [Indexed: 02/01/2023] Open
Abstract
Colorectal cancer (CRC) is a leading cause of mortality worldwide, and preventive screening modalities such as colonoscopy have been shown to noticeably decrease CRC incidence and mortality. Improving colonoscopy quality remains a challenging task due to limiting factors including the training levels of colonoscopists and the variability in polyp sizes, morphologies, and locations. Deep learning methods have led to state-of-the-art systems for the identification of polyps in colonoscopy videos. In this study, we show that deep learning can also be applied to the segmentation of polyps in real time, and that the underlying models can be trained using mostly weakly labeled data in the form of bounding box annotations that do not contain precise contour information. A novel dataset, Polyp-Box-Seg, of 4070 colonoscopy images with polyps from over 2000 patients is collected, and a subset of 1300 images is manually annotated with segmentation masks. A series of models is trained to evaluate various strategies that utilize bounding box annotations for segmentation tasks. A model trained on the 1300 polyp images with segmentation masks achieves a Dice coefficient of 81.52%, which improves significantly to 85.53% when using a weakly supervised strategy leveraging bounding box images. The Polyp-Box-Seg dataset, together with a real-time video demonstration of the segmentation system, is publicly available.
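The sketch below illustrates, under simple assumptions, the two ingredients named in the abstract: turning a bounding box into a coarse "weak" mask and computing the Dice coefficient between binary masks. It is not the authors' pipeline; box format and sizes are hypothetical.

```python
# Minimal sketch (assumptions, not the study's code): rasterise a bounding box into a weak
# segmentation target and score a prediction with the Dice coefficient.
import numpy as np

def box_to_mask(box, height, width):
    """box = (x_min, y_min, x_max, y_max) in pixel coordinates -> binary mask."""
    x0, y0, x1, y1 = box
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1
    return mask

def dice_coefficient(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical example: compare a predicted mask against a box-derived weak label
weak_label = box_to_mask((30, 40, 120, 160), height=256, width=256)
prediction = box_to_mask((35, 50, 115, 150), height=256, width=256)
print(f"Dice: {dice_coefficient(prediction, weak_label):.3f}")
```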
Collapse
Affiliation(s)
- Siwei Chen
- Department of Computer Science, University of California, Irvine, CA 92697, USA; (S.C.); (G.U.)
- Institute for Genomics and Bioinformatics, University of California, Irvine, CA 92697, USA
| | - Gregor Urban
- Department of Computer Science, University of California, Irvine, CA 92697, USA; (S.C.); (G.U.)
- Institute for Genomics and Bioinformatics, University of California, Irvine, CA 92697, USA
| | - Pierre Baldi
- Department of Computer Science, University of California, Irvine, CA 92697, USA; (S.C.); (G.U.)
- Institute for Genomics and Bioinformatics, University of California, Irvine, CA 92697, USA
- Center for Machine Learning and Intelligent Systems, University of California, Irvine, CA 92697, USA
- Correspondence: ; Tel.: +1-949-824-5809
| |
Collapse
|
41
|
Qiu H, Ding S, Liu J, Wang L, Wang X. Applications of Artificial Intelligence in Screening, Diagnosis, Treatment, and Prognosis of Colorectal Cancer. Curr Oncol 2022; 29:1773-1795. [PMID: 35323346 PMCID: PMC8947571 DOI: 10.3390/curroncol29030146] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 02/28/2022] [Accepted: 03/03/2022] [Indexed: 12/29/2022] Open
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Accurate early detection and diagnosis, comprehensive assessment of treatment response, and precise prediction of prognosis are essential to improve patients' survival rates. In recent years, owing to the explosion of clinical and omics data and groundbreaking research in machine learning, artificial intelligence (AI) has shown great application potential in the clinical field of CRC, providing new auxiliary approaches for clinicians to identify high-risk patients, select precise and personalized treatment plans, and predict prognoses. This review comprehensively analyzes and summarizes the research progress and clinical application value of AI technologies in CRC screening, diagnosis, treatment, and prognosis, demonstrating the current status of AI in the main clinical stages. The limitations, challenges, and future perspectives of the clinical implementation of AI are also discussed.
Collapse
Affiliation(s)
- Hang Qiu
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Correspondence: (H.Q.); (X.W.)
| | - Shuhan Ding
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA;
| | - Jianbo Liu
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
| | - Liya Wang
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
| | - Xiaodong Wang
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Correspondence: (H.Q.); (X.W.)
| |
Collapse
|
42
|
Classification of the Confocal Microscopy Images of Colorectal Tumor and Inflammatory Colitis Mucosa Tissue Using Deep Learning. Diagnostics (Basel) 2022; 12:diagnostics12020288. [PMID: 35204379 PMCID: PMC8870781 DOI: 10.3390/diagnostics12020288] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/21/2022] [Accepted: 01/21/2022] [Indexed: 12/09/2022] Open
Abstract
Confocal microscopy image analysis is a useful method for neoplasm diagnosis. Many ambiguous cases are difficult to distinguish with the naked eye, thus leading to high inter-observer variability and significant time investments for learning this method. We aimed to develop a deep learning-based neoplasm classification model that classifies confocal microscopy images of 10× magnified colon tissues into three classes: neoplasm, inflammation, and normal tissue. ResNet50 with data augmentation and transfer learning approaches was used to efficiently train the model with limited training data. A class activation map was generated by using global average pooling to confirm which areas had a major effect on the classification. The proposed method achieved an accuracy of 81%, which was 14.05% more accurate than three machine learning-based methods and 22.6% better than the predictions made by four endoscopists. ResNet50 with data augmentation and transfer learning can be utilized to effectively identify neoplasm, inflammation, and normal tissue in confocal microscopy images. The proposed method outperformed three machine learning-based methods and identified the area that had a major influence on the results. Inter-observer variability and the time required for learning can be reduced if the proposed model is used with confocal microscopy image analysis for diagnosis.
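The sketch below shows, as an assumed simplification, how the class activation map described in this abstract can be derived from a ResNet50 backbone: the last convolutional feature map is weighted by the fully connected layer's weights for the predicted class. Randomly initialised weights stand in for the transfer-learned model, and the three-class head and input size are assumptions.

```python
# Minimal sketch (assumed, simplified) of a class activation map with global average pooling:
# weight the last conv feature map by the FC weights of the predicted class.
import torch
import torch.nn.functional as F
from torchvision import models

num_classes = 3  # neoplasm, inflammation, normal tissue
model = models.resnet50(weights=None)  # transfer learning would load pretrained weights here
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
model.eval()

feature_maps = {}
model.layer4.register_forward_hook(lambda m, i, o: feature_maps.update(last=o))

def class_activation_map(image_tensor):
    """image_tensor: (1, 3, H, W) normalised confocal image (placeholder input)."""
    logits = model(image_tensor)
    cls = logits.argmax(dim=1).item()
    fmap = feature_maps["last"][0].detach()   # (2048, h, w) feature map before global average pooling
    weights = model.fc.weight[cls].detach()   # (2048,) weights of the predicted class
    cam = F.relu(torch.einsum("c,chw->hw", weights, fmap))
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)
    return cls, cam                           # upsample cam to H x W for an overlay if needed

# cls, cam = class_activation_map(torch.randn(1, 3, 224, 224))
```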
Collapse
|
43
|
Yamada M, Shino R, Kondo H, Yamada S, Takamaru H, Sakamoto T, Bhandari P, Imaoka H, Kuchiba A, Shibata T, Saito Y, Hamamoto R. Robust automated prediction of the revised Vienna Classification in colonoscopy using deep learning: development and initial external validation. J Gastroenterol 2022; 57:879-889. [PMID: 35972582 PMCID: PMC9596523 DOI: 10.1007/s00535-022-01908-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 07/21/2022] [Indexed: 02/04/2023]
Abstract
BACKGROUND Improved optical diagnostic technology that can also be used outside expert centers is needed. Hence, we developed an artificial intelligence (AI) system that automatically and robustly predicts the pathological diagnosis based on the revised Vienna Classification using standard colonoscopy images. METHODS We prepared deep learning algorithms and colonoscopy images containing pathologically proven lesions (56,872 images, 6775 lesions). Four classifications were adopted: revised Vienna Classification categories 1, 3, and 4/5, and normal images. The best algorithm in the independent internal validation (14,048 images, 1718 lesions), ResNet152, was used for external validation (255 images, 128 lesions) based on neoplastic and non-neoplastic classification. The diagnostic performance of endoscopists was compared using a computer-assisted interpreting test. RESULTS In the internal validation, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for adenoma (category 3) were 84.6% (95% CI 83.5-85.6%), 99.7% (99.5-99.8%), 90.8% (89.9-91.7%), 89.2% (88.5-99.0%), and 89.8% (89.3-90.4%), respectively. In the external validation, ResNet152's sensitivity, specificity, PPV, NPV, and accuracy for neoplastic lesions were 88.3% (82.6-94.1%), 90.3% (83.0-97.7%), 94.6% (90.5-98.8%), 80.0% (70.6-89.4%), and 89.0% (84.5-93.6%), respectively. This diagnostic performance was superior to that of expert endoscopists. The area under the receiver operating characteristic curve was 0.903 (0.860-0.946). CONCLUSIONS The developed AI system can help non-expert endoscopists make differential diagnoses of colorectal neoplasia on par with expert endoscopists during colonoscopy.
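For readers unfamiliar with the reported indices, the short sketch below shows how sensitivity, specificity, PPV, NPV, and accuracy follow from a binary confusion matrix (e.g. neoplastic vs non-neoplastic in the external validation). The counts in the example are hypothetical, chosen only to roughly echo the reported percentages.

```python
# Minimal sketch (not the study's code) of the diagnostic indices derived from TP/FP/TN/FN.
def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # recall for the positive (neoplastic) class
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical counts for illustration only
print(diagnostic_metrics(tp=90, fp=5, tn=56, fn=12))
```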
Collapse
Affiliation(s)
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
| | - Ryosaku Shino
- Biometrics Research Laboratories, NEC Corporation, Kawasaki, Kanagawa, Japan
| | - Hiroko Kondo
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- RIKEN Center for Advanced Intelligence Project, Cancer Translational Research Team, Tokyo, Japan
| | - Shigemi Yamada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- RIKEN Center for Advanced Intelligence Project, Cancer Translational Research Team, Tokyo, Japan
| | - Hiroyuki Takamaru
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, Japan
| | - Taku Sakamoto
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, Japan
| | - Pradeep Bhandari
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth, UK
| | - Hitoshi Imaoka
- Biometrics Research Laboratories, NEC Corporation, Kawasaki, Kanagawa, Japan
| | - Aya Kuchiba
- Biostatistics Division, National Cancer Center, Tokyo, Japan
| | - Taro Shibata
- Biostatistics Division, National Cancer Center, Tokyo, Japan
| | - Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, Japan
| | - Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, Tokyo, Japan
- RIKEN Center for Advanced Intelligence Project, Cancer Translational Research Team, Tokyo, Japan
| |
Collapse
|
44
|
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27:8103-8122. [PMID: 35068857 PMCID: PMC8704267 DOI: 10.3748/wjg.v27.i47.8103] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 08/22/2021] [Accepted: 12/08/2021] [Indexed: 02/06/2023] Open
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. These systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. However, further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process of regulatory organizations and societies, and to support physicians and patients in accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration into the field of colonoscopy and offers suggestions for future research.
Collapse
Affiliation(s)
- Mahsa Taghiakbari
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
| | - Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
| | - Daniel von Renteln
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
| |
Collapse
|
45
|
Goyal H, Sherazi SAA, Mann R, Gandhi Z, Perisetti A, Aziz M, Chandan S, Kopel J, Tharian B, Sharma N, Thosani N. Scope of Artificial Intelligence in Gastrointestinal Oncology. Cancers (Basel) 2021; 13:5494. [PMID: 34771658 PMCID: PMC8582733 DOI: 10.3390/cancers13215494] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 10/27/2021] [Indexed: 12/12/2022] Open
Abstract
Gastrointestinal cancers are among the leading causes of death worldwide, with over 2.8 million deaths annually. Over the last few decades, advancements in artificial intelligence technologies have led to their application in medicine. The use of artificial intelligence in endoscopic procedures is a significant breakthrough in modern medicine. Currently, the diagnosis of various gastrointestinal cancers relies on the manual interpretation of radiographic images by radiologists and of endoscopic images by endoscopists. This can lead to diagnostic variability, as it requires concentration and clinical experience in the field. Artificial intelligence using machine or deep learning algorithms can provide automatic and accurate image analysis and thus assist in diagnosis. In gastroenterology, the applications of artificial intelligence are vast, ranging from diagnosis, prediction of tumor histology, polyp characterization, metastatic potential, and prognosis to treatment response. It can also provide accurate prediction models to determine the need for intervention with computer-aided diagnosis. The number of research studies on artificial intelligence in gastrointestinal cancer has increased rapidly over the last decade owing to immense interest in the field. This review examines the impact, limitations, and future potential of artificial intelligence in screening, diagnosis, tumor staging, treatment modalities, and prediction models for the prognosis of various gastrointestinal cancers.
Collapse
Affiliation(s)
- Hemant Goyal
- Department of Internal Medicine, The Wright Center for Graduate Medical Education, 501 S. Washington Avenue, Scranton, PA 18505, USA
| | - Syed A. A. Sherazi
- Department of Medicine, John H Stroger Jr Hospital of Cook County, 1950 W Polk St, Chicago, IL 60612, USA;
| | - Rupinder Mann
- Department of Medicine, Saint Agnes Medical Center, 1303 E. Herndon Ave, Fresno, CA 93720, USA;
| | - Zainab Gandhi
- Department of Medicine, Geisinger Wyoming Valley Medical Center, 1000 E Mountain Dr, Wilkes-Barre, PA 18711, USA;
| | - Abhilash Perisetti
- Division of Interventional Oncology & Surgical Endoscopy (IOSE), Parkview Cancer Institute, 11050 Parkview Circle, Fort Wayne, IN 46845, USA; (A.P.); (N.S.)
| | - Muhammad Aziz
- Department of Gastroenterology and Hepatology, University of Toledo Medical Center, 3000 Arlington Avenue, Toledo, OH 43614, USA;
| | - Saurabh Chandan
- Division of Gastroenterology and Hepatology, CHI Health Creighton University Medical Center, 7500 Mercy Rd, Omaha, NE 68124, USA;
| | - Jonathan Kopel
- Department of Medicine, Texas Tech University Health Sciences Center, 3601 4th St, Lubbock, TX 79430, USA;
| | - Benjamin Tharian
- Department of Gastroenterology and Hepatology, The University of Arkansas for Medical Sciences, 4301 W Markham St, Little Rock, AR 72205, USA;
| | - Neil Sharma
- Division of Interventional Oncology & Surgical Endoscopy (IOSE), Parkview Cancer Institute, 11050 Parkview Circle, Fort Wayne, IN 46845, USA; (A.P.); (N.S.)
| | - Nirav Thosani
- Division of Gastroenterology, Hepatology & Nutrition, McGovern Medical School, UTHealth, 6410 Fannin, St #1014, Houston, TX 77030, USA;
| |
Collapse
|
46
|
Christou CD, Tsoulfas G. Challenges and opportunities in the application of artificial intelligence in gastroenterology and hepatology. World J Gastroenterol 2021; 27:6191-6223. [PMID: 34712027 PMCID: PMC8515803 DOI: 10.3748/wjg.v27.i37.6191] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 05/06/2021] [Accepted: 08/31/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is an umbrella term used to describe a cluster of interrelated fields. Machine learning (ML) refers to a model that learns from past data to predict future data. Medicine, and particularly gastroenterology and hepatology, are data-rich fields with extensive data repositories, and therefore fruitful ground for AI/ML-based software applications. In this study, we comprehensively review the current applications of AI/ML-based models in these fields and the opportunities that arise from their application. Specifically, we refer to the applications of AI/ML-based models in the prevention, diagnosis, management, and prognosis of gastrointestinal bleeding, inflammatory bowel diseases, gastrointestinal premalignant and malignant lesions, other nonmalignant gastrointestinal lesions and diseases, hepatitis B and C infection, chronic liver diseases, hepatocellular carcinoma, cholangiocarcinoma, and primary sclerosing cholangitis. At the same time, we identify the major challenges that restrain the widespread use of these models in healthcare, in an effort to explore ways to overcome them. Notably, we elaborate on the concerns regarding intrinsic biases, data protection, cybersecurity, intellectual property, liability, ethical challenges, and transparency. Even at a slower pace than anticipated, AI is infiltrating the healthcare industry. AI in healthcare will become a reality, and every physician will have to engage with it by necessity.
Collapse
Affiliation(s)
- Chrysanthos D Christou
- Organ Transplant Unit, Hippokration General Hospital, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
| | - Georgios Tsoulfas
- Organ Transplant Unit, Hippokration General Hospital, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
| |
Collapse
|
47
|
Nogueira-Rodríguez A, Domínguez-Carbajales R, Campos-Tato F, Herrero J, Puga M, Remedios D, Rivas L, Sánchez E, Iglesias Á, Cubiella J, Fdez-Riverola F, López-Fernández H, Reboiro-Jato M, Glez-Peña D. Real-time polyp detection model using convolutional neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06496-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is a major health problem, where advances towards computer-aided diagnosis (CAD) systems to assist the endoscopist can be a promising path to improvement. Here, a deep learning model for real-time polyp detection based on a pre-trained YOLOv3 (You Only Look Once) architecture, complemented with a post-processing step based on an object-tracking algorithm to reduce false positives, is reported. The base YOLOv3 network was fine-tuned using a dataset composed of 28,576 images labelled with the locations of 941 polyps that will be made public soon. In a frame-based evaluation using isolated images containing polyps, a general F1 score of 0.88 was achieved (recall = 0.87, precision = 0.89), with lower predictive performance for flat polyps but higher for sessile and pedunculated morphologies, as well as with the usage of narrow-band imaging, whereas polyp size < 5 mm does not seem to have a significant impact. In a polyp-based evaluation using polyp and normal mucosa videos, with a positive criterion defined as the presence of at least one 50-frame-length (window size) segment with a ratio of 75% of frames with predicted bounding boxes (frame positivity), 72.61% sensitivity (95% CI 68.99–75.95) and 83.04% specificity (95% CI 76.70–87.92) were achieved (Youden = 0.55, diagnostic odds ratio (DOR) = 12.98). When the positive criterion is less stringent (window size = 25, frame positivity = 50%), sensitivity reaches around 90% (sensitivity = 89.91%, 95% CI 87.20–91.94; specificity = 54.97%, 95% CI 47.49–62.24; Youden = 0.45; DOR = 10.76). The object-tracking algorithm demonstrated a significant improvement in specificity while maintaining sensitivity, with a marginal impact on computational performance. These results suggest that the model could be effectively integrated into a CAD system.
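The polyp-level positivity criterion described above (a window of consecutive frames in which a given fraction contains predicted boxes) can be expressed compactly as a sliding-window check. The sketch below is an assumed implementation of that criterion only, not the authors' evaluation code; the per-frame detection flags are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code) of the video-level positivity criterion:
# positive if at least one window of `window_size` consecutive frames has at least
# `frame_positivity` of its frames containing a predicted bounding box.
import numpy as np

def video_positive(frame_has_detection, window_size=50, frame_positivity=0.75):
    """frame_has_detection: 1D boolean array, one entry per video frame."""
    frames = np.asarray(frame_has_detection, dtype=float)
    if frames.size < window_size:
        return False
    # Fraction of detection-positive frames in every window of length window_size
    window_means = np.convolve(frames, np.ones(window_size) / window_size, mode="valid")
    return bool((window_means >= frame_positivity).any())

# Hypothetical per-frame YOLOv3-after-tracking flags for a short clip
flags = np.zeros(300, dtype=bool)
flags[100:160] = True                      # a sustained run of detections
print(video_positive(flags))               # True under the 50-frame / 75% criterion
print(video_positive(flags, 25, 0.50))     # the less stringent criterion from the abstract
```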
Collapse
|
48
|
Kader R, Hadjinicolaou AV, Georgiades F, Stoyanov D, Lovat LB. Optical diagnosis of colorectal polyps using convolutional neural networks. World J Gastroenterol 2021; 27:5908-5918. [PMID: 34629808 PMCID: PMC8475008 DOI: 10.3748/wjg.v27.i35.5908] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 04/29/2021] [Accepted: 08/24/2021] [Indexed: 02/06/2023] Open
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-malignant and neoplastic polyps. Although technologies for image-enhanced endoscopy are widely available, optical diagnosis has not been incorporated into routine clinical practice, mainly due to significant inter-operator variability. In recent years, there has been a growing number of studies demonstrating the potential of convolutional neural networks (CNN) to enhance optical diagnosis of polyps. Data suggest that the use of CNNs might mitigate the inter-operator variability amongst endoscopists, potentially enabling a “resect and discard” or “leave in” strategy to be adopted in real time. This would have significant financial benefits for healthcare systems, avoid unnecessary polypectomies of non-neoplastic polyps and improve the efficiency of colonoscopy. Here, we review advances in CNN for the optical diagnosis of colorectal polyps, current limitations and future directions.
Collapse
Affiliation(s)
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Division of Surgery and Interventional Sciences, University College London, London W1W 7TY, United Kingdom
| | - Andreas V Hadjinicolaou
- MRC Cancer Unit, Department of Gastroenterology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom
| | - Fanourios Georgiades
- Department of Surgery, University of Cambridge, Cambridge CB2 0QQ, United Kingdom
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Department of Computer Science, University College London, London W1W 7TY, United Kingdom
| | - Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Division of Surgery and Interventional Sciences, University College London, London W1W 7TY, United Kingdom
| |
Collapse
|
49
|
Joseph J, LePage EM, Cheney CP, Pawa R. Artificial intelligence in colonoscopy. World J Gastroenterol 2021; 27:4802-4817. [PMID: 34447227 PMCID: PMC8371500 DOI: 10.3748/wjg.v27.i29.4802] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 05/12/2021] [Accepted: 07/16/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer remains a leading cause of morbidity and mortality in the United States. Advances in artificial intelligence (AI), specifically computer-aided detection and computer-aided diagnosis, offer promising methods of increasing adenoma detection rates with the goal of removing more pre-cancerous polyps. Conversely, these methods may also allow smaller non-cancerous lesions to be diagnosed in vivo and left in place, decreasing the risks that come with unnecessary polypectomies. This review provides an overview of current advances in the use of AI in colonoscopy to aid in polyp detection and characterization, as well as areas of developing research.
Collapse
Affiliation(s)
- Joel Joseph
- Department of Internal Medicine, Wake Forest Baptist Medical Center, Winston Salem, NC 27157, United States
| | - Ella Marie LePage
- Department of Internal Medicine, Wake Forest Baptist Medical Center, Winston Salem, NC 27157, United States
| | - Catherine Phillips Cheney
- Department of Internal Medicine, Wake Forest School of Medicine, Winston Salem, NC 27157, United States
| | - Rishi Pawa
- Department of Internal Medicine, Section of Gastroenterology and Hepatology, Wake Forest Baptist Medical Center, Winston-Salem, NC 27157, United States
| |
Collapse
|
50
|
Yamada A, Niikura R, Otani K, Aoki T, Koike K. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy 2021; 53:832-836. [PMID: 32947623 DOI: 10.1055/a-1266-1066] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
BACKGROUND Although colorectal neoplasms are the most common abnormalities found in colon capsule endoscopy (CCE), no computer-aided detection method is yet available. We developed an artificial intelligence (AI) system that uses deep learning to automatically detect such lesions in CCE images. METHODS We trained a deep convolutional neural network system based on a Single Shot MultiBox Detector using 15 933 CCE images of colorectal neoplasms, such as polyps and cancers. We assessed performance by calculating areas under the receiver operating characteristic curves, along with sensitivities, specificities, and accuracies, using an independent test set of 4784 images, including 1850 images of colorectal neoplasms and 2934 normal colon images. RESULTS The area under the curve for detection of colorectal neoplasia by the AI model was 0.902. The sensitivity, specificity, and accuracy were 79.0 %, 87.0 %, and 83.9 %, respectively, at a probability cutoff of 0.348. CONCLUSIONS We developed and validated a new AI-based system that automatically detects colorectal neoplasms in CCE images.
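The image-level evaluation in this abstract (thresholding detector confidences at a probability cutoff of 0.348 and computing the area under the ROC curve) can be sketched as follows. This is an illustrative reconstruction under assumptions, not the study's pipeline; the per-image scores in the example are simulated.

```python
# Minimal sketch (illustrative, not the study's code): threshold per-image detector confidences
# at a probability cutoff to get sensitivity/specificity/accuracy, and compute ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def image_level_performance(y_true, scores, cutoff=0.348):
    """y_true: 1 for images with neoplasia, 0 for normal; scores: max detection confidence per image."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    y_pred = (scores >= cutoff).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy, roc_auc_score(y_true, scores)

# Simulated scores for a test set of the same size as in the abstract (hypothetical values)
rng = np.random.default_rng(1)
y = np.concatenate([np.ones(1850), np.zeros(2934)]).astype(int)
s = np.concatenate([rng.beta(5, 2, 1850), rng.beta(2, 5, 2934)])
print(image_level_performance(y, s))
```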
Collapse
Affiliation(s)
- Atsuo Yamada
- Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Ryota Niikura
- Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Keita Otani
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Tomonori Aoki
- Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Kazuhiko Koike
- Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| |
Collapse
|