1.
Taghiakbari M, Hamidi Ghalehjegh S, Jehanno E, Berthier T, di Jorio L, Ghadakzadeh S, Barkun A, Takla M, Bouin M, Deslandres E, Bouchard S, Sidani S, Bengio Y, von Renteln D. Automated Detection of Anatomical Landmarks During Colonoscopy Using a Deep Learning Model. J Can Assoc Gastroenterol 2023; 6:145-151. PMID: 37538187. PMCID: PMC10395661. DOI: 10.1093/jcag/gwad017.
Abstract
Background and aims Identification and photo-documentation of the ileocecal valve (ICV) and appendiceal orifice (AO) confirm the completeness of colonoscopy examinations. We aimed to develop and test a deep convolutional neural network (DCNN) model that can automatically identify the ICV and AO and differentiate these landmarks from normal mucosa and colorectal polyps. Methods We prospectively collected annotated full-length colonoscopy videos of 318 patients undergoing outpatient colonoscopies. We created three nonoverlapping training, validation, and test data sets with 25,444 unaltered frames extracted from the colonoscopy videos showing four landmarks/image classes (AO, ICV, normal mucosa, and polyps). A DCNN classification model was developed, validated, and tested in separate data sets of images containing the four different landmarks. Results After training and validation, the DCNN model identified both the AO and ICV in 18 of 21 patients (85.7%). The accuracy of the model for differentiating AO from normal mucosa was 86.4% (95% CI 84.1% to 88.5%), and for differentiating ICV from normal mucosa it was 86.4% (95% CI 84.1% to 88.6%). Furthermore, the accuracy of the model for differentiating polyps from normal mucosa was 88.6% (95% CI 86.6% to 90.3%). Conclusion This model offers a novel tool to assist endoscopists with automated identification of the AO and ICV during colonoscopy. The model can reliably distinguish these anatomical landmarks from normal mucosa and colorectal polyps. It can be implemented into automated colonoscopy report generation, photo-documentation, and quality-auditing solutions to improve colonoscopy reporting quality.
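The abstract's key data-hygiene point is that the training, validation, and test sets were nonoverlapping. One common way to guarantee this with video-derived frames is to split at the patient level, so no patient's frames appear in more than one set. The function below is an illustrative sketch of that idea; the split ratios, function name, and frame representation are assumptions, not the authors' code.

```python
import random


def patient_level_split(frames, train=0.7, val=0.15, seed=42):
    """Split (patient_id, frame) pairs into nonoverlapping train/val/test
    sets at the patient level, so no patient contributes frames to more
    than one set (avoiding leakage between near-identical video frames)."""
    patients = sorted({p for p, _ in frames})
    rng = random.Random(seed)
    rng.shuffle(patients)

    n_train = int(len(patients) * train)
    n_val = int(len(patients) * val)
    groups = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    # Assign every frame to the set that owns its patient.
    return {name: [f for f in frames if f[0] in ids]
            for name, ids in groups.items()}
```

Splitting by patient rather than by frame is stricter than a random frame-level split, because consecutive frames from the same colonoscopy video are highly correlated.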
Affiliation(s)
- Mahsa Taghiakbari
- Faculty of Medicine, Department of Biomedical Sciences, University of Montreal, Montreal, Quebec, Canada
- Department of Medicine, Division of Gastroenterology, University of Montreal Hospital Research Center (CRCHUM), Montreal, Quebec, Canada
- Emmanuel Jehanno
- Department of Artificial Intelligence, Imagia Canexia Health Inc., Montreal, Canada
- Tess Berthier
- Department of Artificial Intelligence, Imagia Canexia Health Inc., Montreal, Canada
- Lisa di Jorio
- Department of Artificial Intelligence, Imagia Canexia Health Inc., Montreal, Canada
- Saber Ghadakzadeh
- Department of Artificial Intelligence, Imagia Canexia Health Inc., Montreal, Canada
- Alan Barkun
- Division of Gastroenterology, McGill University Health Center, McGill University, Montreal, Quebec, Canada
- Mark Takla
- Faculty of Medicine, Department of Biomedical Sciences, University of Montreal, Montreal, Quebec, Canada
- Department of Medicine, Division of Gastroenterology, University of Montreal Hospital Research Center (CRCHUM), Montreal, Quebec, Canada
- Mickael Bouin
- Department of Medicine, Division of Gastroenterology, University of Montreal Hospital Research Center (CRCHUM), Montreal, Quebec, Canada
- Division of Gastroenterology, University of Montreal Hospital Center (CHUM), Montreal, Quebec, Canada
- Eric Deslandres
- Division of Gastroenterology, University of Montreal Hospital Center (CHUM), Montreal, Quebec, Canada
- Simon Bouchard
- Division of Gastroenterology, University of Montreal Hospital Center (CHUM), Montreal, Quebec, Canada
- Sacha Sidani
- Division of Gastroenterology, University of Montreal Hospital Center (CHUM), Montreal, Quebec, Canada
- Yoshua Bengio
- Faculty of Medicine, Department of Biomedical Sciences, University of Montreal, Montreal, Quebec, Canada
- Daniel von Renteln
- Department of Medicine, Division of Gastroenterology, University of Montreal Hospital Research Center (CRCHUM), Montreal, Quebec, Canada
- Division of Gastroenterology, University of Montreal Hospital Center (CHUM), Montreal, Quebec, Canada
2.
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. PMID: 36084720. DOI: 10.1016/j.gie.2022.08.043.
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known about the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was performed to identify databases directly. Databases were included if they contained data on polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open-access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were underreported in almost all databases.
CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
3.
Low DJ, Hong Z, Jugnundan S, Mukherjee A, Grover SC. Automated Detection of Bowel Preparation Scoring and Adequacy With Deep Convolutional Neural Networks. J Can Assoc Gastroenterol 2022; 5:256-260. PMID: 36467599. PMCID: PMC9713630. DOI: 10.1093/jcag/gwac013.
Abstract
INTRODUCTION Adequate bowel preparation is integral to effective colonoscopy. Inadequate bowel preparation has been associated with a reduced adenoma detection rate and increased post-colonoscopy colorectal cancer (PCCRC). As a result, the USMSTF recommends early-interval reevaluation for colonoscopies with inadequate bowel preparation. However, bowel preparation documentation is highly variable and subjectively interpreted. In this study, we developed deep convolutional neural networks (DCNNs) to objectively ascertain bowel preparation. METHODS Bowel preparation scores were assigned using the Boston Bowel Preparation Scale (BBPS). Bowel preparation adequacy and inadequacy were defined as BBPS ≥2 and BBPS <2, respectively. A total of 38,523 images were extracted from 28 colonoscopy videos and split into 26,966 images for training, 7,704 for validation, and 3,853 for testing. Two DCNNs were created using a DenseNet-169 backbone in the PyTorch library, evaluating BBPS score and bowel preparation adequacy. We used the Adam optimizer with an initial learning rate of 3 × 10⁻⁴, a scheduler to decay the learning rate of each parameter group by a factor of 0.1 every 7 epochs, and focal loss as the criterion for both classifiers. RESULTS The overall accuracy for BBPS subclassification and determination of adequacy was 91% and 98%, respectively. The accuracy for BBPS 0, BBPS 1, BBPS 2, and BBPS 3 was 84%, 91%, 85%, and 96%, respectively. CONCLUSION We developed DCNNs capable of assessing bowel preparation adequacy and scoring with a high degree of accuracy. However, this algorithm will require further research to assess its efficacy in real-time colonoscopy.
Affiliation(s)
- Daniel J Low
- St. Michael’s Hospital, Toronto, ON M5B 1W8, Canada
- Zhuoqiao Hong
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Samir C Grover
- Correspondence: Samir Grover, MD, MEd, St. Michael’s Hospital, 30 Bond Street, Toronto, ON M5B 1W8, Canada, e-mail: