1
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. [PMID: 37696178] [DOI: 10.1016/j.compbiomed.2023.107388]
Abstract
Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of intestinal histopathology image analysis, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, together with the medical knowledge of intestinal histopathology relevant to them. Second, we introduce the traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, we analyze the existing methods and discuss their application prospects in this field.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
2
Galati JS, Lin K, Gross SA. Recent advances in devices and technologies that might prove revolutionary for colonoscopy procedures. Expert Rev Med Devices 2023; 20:1087-1103. [PMID: 37934873] [DOI: 10.1080/17434440.2023.2280773]
Abstract
INTRODUCTION Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer-related mortality in the world. Adenoma detection rate (ADR), a quality indicator for colonoscopy, has gained prominence as it is inversely related to CRC incidence and mortality. As such, recent efforts have focused on developing novel colonoscopy devices and technologies to improve ADR. AREAS COVERED The main objective of this paper is to provide an overview of advancements in the fields of colonoscopy mechanical attachments, artificial intelligence-assisted colonoscopy, and colonoscopy optical enhancements with respect to ADR. We accomplished this by performing a comprehensive search of multiple electronic databases from inception to September 2023. This review is intended to be an introduction to colonoscopy devices and technologies. EXPERT OPINION Numerous mechanical attachments and optical enhancements with the potential to improve ADR have been developed, and AI has gone from an inaccessible concept to a feasible means of improving ADR. While these advances are exciting and portend a change in what will be considered standard colonoscopy, they continue to require refinement. Future studies should focus on combining modalities to further improve ADR and on exploring the use of these technologies in other facets of colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Internal Medicine, NYU Langone Health, New York, NY, USA
- Kevin Lin
- Department of Internal Medicine, NYU Langone Health, New York, NY, USA
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY, USA
3
Ahmad OF, Mazomenos E, Chadebecq F, Kader R, Hussein M, Haidry RJ, Puyal JG, Brandao P, Toth D, Mountney P, Seward E, Vega R, Stoyanov D, Lovat LB. Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology. J Gastroenterol Hepatol 2023; 38:768-774. [PMID: 36652526] [PMCID: PMC10601973] [DOI: 10.1111/jgh.16127]
Abstract
BACKGROUND AND AIM Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS Eleven participants (seven trainees and four medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those in which the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognized as polyps by participants. A video study of 39 subtle polyps was also performed, in which participants' polyp recognition performance was compared with that of a convolutional neural network. RESULTS Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P = 0.0264). In the video validation, the convolutional neural network detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0%, and 15.4%, respectively. CONCLUSIONS Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.
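The per-polyp sensitivities quoted above are simple detection fractions: polyps correctly recognized divided by total polyps shown. A minimal sketch of the calculation, using hypothetical counts rather than the study's raw data:

```python
def per_polyp_sensitivity(detected: int, total: int) -> float:
    """Percentage of shown polyps that were correctly recognized."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * detected / total

# Hypothetical illustration on a 39-polyp video set: recognizing
# 31 of 39 polyps gives a per-polyp sensitivity of about 79.5%.
print(round(per_polyp_sensitivity(31, 39), 1))  # prints 79.5
```

The same fraction can be pooled across readers or computed per reader; the abstract does not specify the pooling, so the counts here are illustrative only.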
Affiliation(s)
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Francois Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Mohamed Hussein
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Rehan J Haidry
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Odin Vision Ltd, London, UK
- Patrick Brandao
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Odin Vision Ltd, London, UK
- Ed Seward
- Gastrointestinal Services, University College London Hospital, London, UK
- Roser Vega
- Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
4
Yin M, Liu L, Gao J, Lin J, Qu S, Xu W, Liu X, Xu C, Zhu J. Deep learning for pancreatic diseases based on endoscopic ultrasound: A systematic review. Int J Med Inform 2023; 174:105044. [PMID: 36948061] [DOI: 10.1016/j.ijmedinf.2023.105044]
Abstract
BACKGROUND AND AIMS Endoscopic ultrasonography (EUS) is one of the main examinations in pancreatic diseases. A series of studies has reported the application of deep learning (DL)-assisted EUS in the diagnosis of pancreatic diseases. This systematic review aims to evaluate the role of DL algorithms in assisting EUS diagnosis of pancreatic diseases. METHODS Literature searches were conducted in the PubMed and Semantic Scholar databases. Studies that developed DL models for pancreatic diseases based on EUS were eligible for inclusion. This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, and quality assessment of the included studies was performed according to the IJMEDI checklist. RESULTS A total of 23 studies were enrolled in this systematic review and could be categorized into three groups according to computer vision task: classification, detection, and segmentation. Seventeen studies focused on the classification task, among which five developed simple neural network (NN) models and twelve constructed convolutional NN (CNN) models. Three studies addressed the detection task and five the segmentation task, all based on CNN architectures. All models presented in the studies performed well based on EUS images, videos, or voice. According to the IJMEDI checklist, six studies were recognized as high-grade quality, with scores above 35 points. CONCLUSIONS DL algorithms show great potential with EUS images/videos/voice for pancreatic diseases. However, there is room for improvement in areas such as sample size, multi-center cooperation, data preprocessing, model interpretability, and code sharing.
Affiliation(s)
- Minyue Yin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Lu Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Jingwen Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Shuting Qu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Wei Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Xiaolin Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou 215000, China
5
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
6
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720] [DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 with open access with barriers, and 15 with regulated access. The 22 open-access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
7
Koh FH, Ladlad J, Teo EK, Lin CL, Foo FJ. Real-time artificial intelligence (AI)-aided endoscopy improves adenoma detection rates even in experienced endoscopists: a cohort study in Singapore. Surg Endosc 2023; 37:165-171. [PMID: 35882667] [PMCID: PMC9321269] [DOI: 10.1007/s00464-022-09470-w]
Abstract
BACKGROUND Colonoscopy is a mainstay of detecting premalignant neoplastic lesions in the colon. Real-time artificial intelligence (AI)-aided colonoscopy purportedly improves the polyp detection rate, especially for small flat lesions. The aim of this study was to evaluate the performance of real-time AI-aided colonoscopy in the detection of colonic polyps. METHODS A prospective single-institution cohort study was conducted in Singapore. All real-time AI-aided colonoscopies, regardless of indication, performed by specialist-grade endoscopists were anonymously recorded from July to September 2021 and reviewed by 2 independent authors (FHK, JL). Sustained detection of an area by the program was regarded as a "hit". Histology of the polypectomy specimens was reviewed to determine the adenoma detection rate (ADR). Each endoscopist's performance with AI was compared against their baseline performance without AI. RESULTS A total of 24 (82.8%) endoscopists participated, with 18 (62.1%) performing ≥ 5 AI-aided colonoscopies; of these 18, 72.2% (n = 13) were general surgeons. During the 3-month period, 487 "hits" were encountered in 298 colonoscopies. Polypectomies were performed for 51.3% of these hits, and 68.4% of the polypectomies were adenomas on histology. The post-intervention median ADR of 30.4% was higher than the median baseline polypectomy rate of 24.3% (p = 0.02). Of the adenomas excised, 14 (5.6%) were sessile serrated adenomas. Of those who performed ≥ 5 AI-aided colonoscopies, 13 (72.2%) had an improved ADR compared with their polypectomy rate before the introduction of AI, of whom 2 improved significantly. CONCLUSIONS Real-time AI-aided colonoscopy has the potential to improve ADR even for experienced endoscopists and could therefore improve the quality of colonoscopy.
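ADR as used above is the proportion of colonoscopies in which at least one histologically confirmed adenoma is found. A minimal sketch of the calculation (the record format and counts below are hypothetical, not the study's data):

```python
def adenoma_detection_rate(procedures: list[list[str]]) -> float:
    """ADR: percentage of colonoscopies yielding >= 1 histologically
    confirmed adenoma. Each inner list holds the histology results of
    one colonoscopy's polypectomies (empty list = no polyps excised)."""
    if not procedures:
        raise ValueError("need at least one procedure")
    with_adenoma = sum(1 for histology in procedures if "adenoma" in histology)
    return 100.0 * with_adenoma / len(procedures)

# Hypothetical example: 3 of 10 colonoscopies yield an adenoma -> ADR 30.0%.
records = [["adenoma"], [], [], ["adenoma", "hyperplastic"], [],
           [], ["adenoma"], [], [], []]
print(round(adenoma_detection_rate(records), 1))  # prints 30.0
```

The denominator is all colonoscopies performed, not all polypectomies, which is why ADR can differ from the crude polypectomy rate quoted in the abstract.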
Affiliation(s)
- Frederick H. Koh
- Colorectal Service, Department of General Surgery, Sengkang General Hospital, SingHealth Services, 110 Sengkang East Way, Singapore 544886, Singapore
- Jasmine Ladlad
- Colorectal Service, Department of General Surgery, Sengkang General Hospital, SingHealth Services, 110 Sengkang East Way, Singapore 544886, Singapore
- Eng-Kiong Teo
- Department of Gastroenterology and Hepatology, Sengkang General Hospital, SingHealth Services, Singapore, Singapore
- Cui-Li Lin
- Department of Gastroenterology and Hepatology, Sengkang General Hospital, SingHealth Services, Singapore, Singapore
- Fung-Joon Foo
- Colorectal Service, Department of General Surgery, Sengkang General Hospital, SingHealth Services, 110 Sengkang East Way, Singapore 544886, Singapore; Endoscopy Centre, Division of Hyperacute Care, Sengkang General Hospital, Singapore, Singapore
8
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
9
Artificial intelligence complemented by water exchange for right-sided colonic polyp detection: It's time to dive! Gastrointest Endosc 2022; 95:1207-1209. [PMID: 35410726] [DOI: 10.1016/j.gie.2022.02.029]
10
Mori Y, Misawa M, Kudo S. Challenges in artificial intelligence for polyp detection. Dig Endosc 2022; 34:870-871. [PMID: 35318734] [PMCID: PMC9314935] [DOI: 10.1111/den.14279]
Affiliation(s)
- Yuichi Mori
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway; Section for Gastroenterology, Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Shin-ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan