1. van Bokhorst QNE, Houwen BBSL, Hazewinkel Y, Fockens P, Dekker E. Advances in artificial intelligence and computer science for computer-aided diagnosis of colorectal polyps: current status. Endosc Int Open 2023;11:E752-E767. PMID: 37593158. PMCID: PMC10431975. DOI: 10.1055/a-2098-1999.
Affiliation(s)
- Querijn N E van Bokhorst: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Tergooi Medical Center, Hilversum, the Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
2. Raju ASN, Venkatesh K. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset. Bioengineering (Basel) 2023;10:738. PMID: 37370669. DOI: 10.3390/bioengineering10060738.
Abstract
Colorectal cancer carries a high mortality rate and significant patient risk. Diagnosis relies on images obtained during colonoscopy, which makes timely diagnosis and treatment important. Deep learning techniques could enhance the diagnostic accuracy of existing systems. Using advanced deep learning techniques, a new EnsemDeepCADx system for accurate colorectal cancer diagnosis has been developed. Optimal accuracy is achieved by combining convolutional neural networks (CNNs) with transfer learning via bidirectional long short-term memory (BiLSTM) and support vector machines (SVMs). Four pre-trained CNN models (AlexNet, DarkNet-19, DenseNet-201, and ResNet-50) make up the ADaDR-22, ADaR-22, and DaRD-22 ensemble CNNs. The CADx system is thoroughly evaluated at each of its stages. Colour, greyscale, and local binary pattern (LBP) image datasets and features are drawn from the CKHK-22 mixed dataset. In the second stage, the returned features are compared to a new feature-fusion dataset using three distinct CNN ensembles. The ensemble CNNs are then combined with SVM-based transfer learning, comparing raw features to the feature-fusion datasets. In the final stage of transfer learning, BiLSTM and SVM are combined with a CNN ensemble. The ensemble fusion CNN DaRD-22 with BiLSTM and SVM achieved the best testing accuracy on the original, grey, LBP, and feature-fusion datasets (95.96%, 88.79%, 73.54%, and 97.89%, respectively). Comparing the outputs of all four feature datasets with those of the three ensemble CNNs at each stage enables the EnsemDeepCADx system to attain its highest accuracy.
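The two core ideas in this abstract, concatenating per-backbone features and fusing model outputs, can be illustrated with a minimal NumPy sketch. This is not the authors' code; the function names and soft-voting rule are illustrative assumptions.

```python
import numpy as np

def fuse_features(per_model_features):
    """Late feature fusion: concatenate one feature vector per CNN backbone
    (e.g. AlexNet, DarkNet-19, DenseNet-201, ResNet-50) for each sample."""
    return np.concatenate(per_model_features, axis=-1)

def ensemble_predict(per_model_probs):
    """Soft-voting ensemble: average each model's class probabilities,
    then take the argmax as the ensemble's predicted class per sample."""
    return np.mean(per_model_probs, axis=0).argmax(axis=-1)
```

The fused vectors would feed the downstream SVM/BiLSTM classifiers, while soft voting is one common way to combine the ensemble members' predictions.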
Affiliation(s)
- Akella Subrahmanya Narasimha Raju: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Chennai 603203, India
- Kaliyamurthy Venkatesh: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Chennai 603203, India
3. Nakajo K, Ninomiya Y, Kondo H, Takeshita N, Uchida E, Aoyama N, Inaba A, Ikematsu H, Shinozaki T, Matsuura K, Hayashi R, Akimoto T, Yano T. Anatomical classification of pharyngeal and laryngeal endoscopic images using artificial intelligence. Head Neck 2023;45:1549-1557. PMID: 37045798. DOI: 10.1002/hed.27370.
Abstract
BACKGROUND The entire pharynx should be observed endoscopically to avoid missing pharyngeal lesions. An artificial intelligence (AI) model that recognizes anatomical locations can help identify blind spots. We developed and evaluated an AI model that classifies pharyngeal and laryngeal endoscopic locations. METHODS The AI model was trained using 5382 endoscopic images, categorized into 15 anatomical locations, and evaluated using an independent dataset of 1110 images. The main outcomes were model accuracy, precision, recall, and F1-score. We also investigated which regions of the input images contributed to the model's predictions using gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM. RESULTS Our AI model correctly classified pharyngeal and laryngeal images into the 15 anatomical locations with an accuracy of 93.3%. The weighted averages of precision, recall, and F1-score were 0.934, 0.933, and 0.933, respectively. CONCLUSION Our AI model shows excellent performance in determining pharyngeal and laryngeal anatomical locations and can help alert endoscopists to blind spots.
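Grad-CAM, used here to inspect which image regions drive the predictions, reduces to a small computation once a convolutional layer's activations and the class-score gradients are available. A framework-agnostic NumPy sketch (illustrative only, assuming the activations and gradients were extracted elsewhere):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one image.

    activations: conv feature maps, shape (C, H, W)
    gradients:   d(class score)/d(activations), same shape
    Returns an (H, W) map in [0, 1] highlighting class-relevant regions.
    """
    weights = gradients.mean(axis=(1, 2))             # channel importance via global average pooling
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise for overlay on the input image
    return cam
```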
Affiliation(s)
- Keiichiro Nakajo: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan; Cancer Medicine, Cooperative Graduate School, The Jikei University Graduate School of Medicine, Tokyo, Japan; Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
- Youichi Ninomiya: Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
- Hibiki Kondo: Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
- Nobuyoshi Takeshita: Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
- Erika Uchida: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan
- Naoki Aoyama: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan
- Atsushi Inaba: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan
- Hiroaki Ikematsu: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan; Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
- Takeshi Shinozaki: Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Kazuto Matsuura: Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Ryuichi Hayashi: Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Tetsuo Akimoto: Cancer Medicine, Cooperative Graduate School, The Jikei University Graduate School of Medicine, Tokyo, Japan; Department of Radiation Oncology and Particle Therapy, National Cancer Center Hospital East, Kashiwa, Japan
- Tomonori Yano: Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan; Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Japan
4. Sedighipour Chafjiri F, Mohebbian MR, Wahid KA, Babyn P. Classification of endoscopic image and video frames using distance metric-based learning with interpolated latent features. Multimed Tools Appl 2023;82:1-22. PMID: 37362715. PMCID: PMC10020761. DOI: 10.1007/s11042-023-14982-1.
Abstract
Conventional endoscopy (CE) and wireless capsule endoscopy (WCE) are well-known tools for diagnosing gastrointestinal (GI) tract disorders. Defining the anatomical location within the GI tract helps clinicians determine appropriate treatment options, which can reduce the need for repeat endoscopy. Little research addresses localization of the anatomical location of WCE and CE images by classification, mainly because annotated data are difficult to collect. In this study, we present a few-shot learning method based on distance metric learning that combines transfer learning and manifold mixup schemes to localize and classify endoscopic images and video frames. The proposed method allows us to develop a pipeline for endoscopy video sequence localization that can be trained with only a few samples. Manifold mixup improves learning by allowing more training epochs while reducing overfitting and providing more accurate decision boundaries. A dataset was collected from 10 different anatomical positions of the human GI tract. Two models were trained using only 78 CE and 27 WCE annotated frames to predict the location of 25,700 and 1825 video frames from CE and WCE, respectively. We performed a subjective evaluation with nine gastroenterologists to validate the need for such an automated system to localize endoscopic images and video frames. Our method achieved higher accuracy and a higher F1-score than the scores from the subjective evaluation. In addition, the results show improved performance, with lower cross-entropy loss, compared with several existing methods trained on the same datasets. This indicates that the proposed method has the potential to be used in endoscopy image classification.
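Manifold mixup, the regularizer named in this abstract, interpolates hidden representations and labels of random batch pairs with a Beta-distributed coefficient. A minimal sketch (an illustration of the general technique, not the paper's implementation):

```python
import numpy as np

def manifold_mixup(hidden, y_onehot, alpha=2.0, rng=None):
    """Mix hidden-layer activations and one-hot labels within a batch.

    A coefficient lam ~ Beta(alpha, alpha) convexly combines each sample
    with a randomly permuted partner, which smooths decision boundaries
    and reduces overfitting when training on very few labeled frames.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in [0, 1]
    perm = rng.permutation(len(hidden))          # random partner for each sample
    mixed_h = lam * hidden + (1 - lam) * hidden[perm]
    mixed_y = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return mixed_h, mixed_y, lam
```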
Affiliation(s)
- Fatemeh Sedighipour Chafjiri: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Mohammad Reza Mohebbian: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Khan A. Wahid: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Paul Babyn: Department of Medical Imaging, University of Saskatchewan and Saskatchewan Health Authority, Saskatoon, SK S7K 0M7, Canada
5. Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023;20:171-182. PMID: 36352158. DOI: 10.1038/s41575-022-00701-y.
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
6. Zhu JQ, Wang ML, Li Y, Zhang W, Li LJ, Liu L, Zhang Y, Han CJ, Tie CW, Wang SX, Wang GQ, Ni XG. Convolutional neural network based anatomical site identification for laryngoscopy quality control: A multicenter study. Am J Otolaryngol 2023;44:103695. PMID: 36473265. DOI: 10.1016/j.amjoto.2022.103695.
Abstract
OBJECTIVES Video laryngoscopy is an important diagnostic tool for head and neck cancers. Artificial intelligence (AI) systems have been shown to monitor blind spots during esophagogastroduodenoscopy. This study aimed to test the performance of an AI-driven intelligent laryngoscopy monitoring assistant (ILMA) in identifying landmark anatomical sites in laryngoscopic images and videos using a convolutional neural network (CNN). MATERIALS AND METHODS Laryngoscopic images taken from January to December 2018 were retrospectively collected, and ILMA was developed using the CNN model of Inception-ResNet-v2 + Squeeze-and-Excitation Networks (SENet). A total of 16,000 laryngoscopic images were used for training. These were assigned to 20 landmark anatomical sites covering six major head and neck regions. In addition, the performance of ILMA in identifying anatomical sites was validated using 4000 laryngoscopic images and 25 videos provided by five other tertiary hospitals. RESULTS ILMA identified the 20 anatomical sites in the laryngoscopic images with an overall accuracy of 97.60%, and the average sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 100%, 99.87%, 97.65%, and 99.87%, respectively. In addition, multicenter clinical verification showed that the accuracy of ILMA in identifying the 20 targeted anatomical sites in 25 laryngoscopic videos from five hospitals was ≥95%. CONCLUSION The proposed CNN-based ILMA model can rapidly and accurately identify anatomical sites in laryngoscopic images. The model can reflect the coverage of head and neck anatomical regions by laryngoscopy, showing potential for improving the quality of laryngoscopy.
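The sensitivity, specificity, PPV, and NPV figures reported above all follow from a one-vs-rest reading of the multiclass confusion matrix. A quick sketch of that computation (illustrative; the averaging convention used by the authors is not specified here):

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity, specificity, PPV and NPV from a square
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp      # cases of the class the model missed
    fp = cm.sum(axis=0) - tp      # other classes predicted as this one
    tn = cm.sum() - tp - fn - fp
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```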
Affiliation(s)
- Ji-Qing Zhu: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mei-Ling Wang: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Ying Li: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Wei Zhang: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Li-Juan Li: Department of Otorhinolaryngology, The People's Hospital of Wenshan Prefecture, Wenshan, Yunnan, China
- Lin Liu: Department of Otolaryngology-Head and Neck Surgery, Dalian Municipal Friendship Hospital, Dalian, Liaoning, China
- Yan Zhang: Department of Otorhinolaryngology, Chongqing Traditional Chinese Medicine Hospital, Chongqing, China
- Cai-Juan Han: Department of Otolaryngology-Head and Neck Surgery, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Qingdao, Shandong, China
- Cheng-Wei Tie: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shi-Xu Wang: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Gui-Qi Wang: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiao-Guang Ni: Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7. Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023;97:184-199.e16. PMID: 36084720. DOI: 10.1016/j.gie.2022.08.043.
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
8. Houwen BBSL, Hartendorp F, Giotis I, Hazewinkel Y, Fockens P, Walstra TR, Dekker E, van Boeckel P, Boparai K, Borg FT, Carballal S, Cazemier M, Daca M, van Eijk B, Jansen J, Koussoulas V, Kuipers T, van Lelyveld N, Ordas I, Marsman W, Moreira L, Muños FR, Noach L, Pellisé M, Ramsoekh D, Schröder R, van Soest E, van Noorden JT, Tytgat K, van Oosterwijk P, van Putten P, Vehmeijer A, Vries RD, van der Vlugt M, Voogd F, van der Zanden E. Computer-aided classification of colorectal segments during colonoscopy: a deep learning approach based on images of a magnetic endoscopic positioning device. Scand J Gastroenterol 2022;58:649-655. PMID: 36458659. DOI: 10.1080/00365521.2022.2151320.
Abstract
OBJECTIVE Assessment of the anatomical colorectal segment of polyps during colonoscopy is important for treatment and follow-up strategies, but is largely operator dependent. This feasibility study aimed to assess whether a deep learning approach using images of a magnetic endoscope imaging (MEI) positioning device can objectively divide the colorectum into anatomical segments. METHODS Models based on the VGG-16 convolutional neural network architecture were developed to classify the colorectum into anatomical segments. These models were pre-trained on ImageNet data and further trained using prospectively collected data from the POLAR study, in which endoscopists used MEI (3930 still images and 90,151 video frames). Five-fold cross-validation with multiple runs was used to evaluate the overall diagnostic accuracy of the models for colorectal segment classification (with a 5-class and a 2-class colorectal segment division). The colorectal segment assigned by endoscopists was used as the reference standard. RESULTS For the 5-class colorectal segment division, the best performing model correctly classified the colorectal segment of 753 of the 1196 polyps, corresponding to an overall accuracy of 63%, sensitivity of 63%, specificity of 89% and kappa of 0.47. For the 2-class colorectal segment division, 1112 of the 1196 polyps were correctly classified, corresponding to an accuracy of 93%, sensitivity of 93%, specificity of 90% and kappa of 0.82. CONCLUSION The diagnostic performance of a deep learning approach for colorectal segment classification based on images of a MEI device is as yet suboptimal (clinicaltrials.gov: NCT03822390).
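The kappa values quoted above measure chance-corrected agreement between the model's segment labels and the endoscopists' reference standard. A compact sketch of Cohen's kappa (illustrative; it assumes the two inputs are equal-length label sequences):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement between two label sequences,
    corrected for the agreement expected by chance from each labeler's
    class frequencies. 1.0 = perfect agreement, 0.0 = chance level."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.union1d(y_true, y_pred)
    p_obs = np.mean(y_true == y_pred)                      # observed agreement
    p_chance = sum(np.mean(y_true == c) * np.mean(y_pred == c)
                   for c in classes)                       # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)
```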
Affiliation(s)
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Fons Hartendorp: Department of Computer Science, University of Amsterdam, Amsterdam, the Netherlands
- Ioanis Giotis: ZiuZ Visual Intelligence, Gorredijk, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Radboud University Medical Center, Radboud University, Nijmegen, The Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Taco R Walstra: Department of Computer Science, University of Amsterdam, Amsterdam, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands; Bergman Clinics Maag & Darm Amsterdam, Amsterdam, The Netherlands
- P. van Boeckel: Department of Gastroenterology and Hepatology, Sint Antonius Ziekenhuis, Nieuwegein, the Netherlands
- K. Boparai: Department of Gastroenterology and Hepatology, Amstelland Hospital, Amstelveen, the Netherlands
- F. ter Borg: Department of Gastroenterology and Hepatology, Deventer Hospital, Deventer, The Netherlands
- S. Carballal: Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands; Department of Gastroenterology, Hospital Clinic of Barcelona, Barcelona, Spain
- M. Cazemier: Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Institut d'Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Universitat de Barcelona, Barcelona, Spain
- M. Daca: Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands; Department of Gastroenterology, Hospital Clinic of Barcelona, Barcelona, Spain
- B. van Eijk: Department of Gastroenterology and Hepatology, Spaarne Ziekenhuis, Hoofddorp, the Netherlands
- J.M Jansen: Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Institut d'Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Universitat de Barcelona, Barcelona, Spain
- V. Koussoulas: Department of Gastroenterology and Hepatology, Nij Smellinghe Hospital, Drachten, The Netherlands
- T. Kuipers: Department of Gastroenterology and Hepatology, Amstelland Hospital, Amstelveen, the Netherlands
- N. van Lelyveld: Department of Gastroenterology and Hepatology, Sint Antonius Ziekenhuis, Nieuwegein, the Netherlands
- I. Ordas: Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands; Department of Gastroenterology, Hospital Clinic of Barcelona, Barcelona, Spain
- W. Marsman: Department of Gastroenterology and Hepatology, Nij Smellinghe Hospital, Drachten, The Netherlands
- L. Moreira: Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands; Department of Gastroenterology, Hospital Clinic of Barcelona, Barcelona, Spain
- F.J Rando Muños: Department of Gastroenterology and Hepatology, Nij Smellinghe Hospital, Drachten, The Netherlands
- L. Noach: Department of Gastroenterology and Hepatology, Amstelland Hospital, Amstelveen, the Netherlands
- M. Pellisé: Department of Gastroenterology and Hepatology, Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands; Department of Gastroenterology, Hospital Clinic of Barcelona, Barcelona, Spain
- D. Ramsoekh: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; Bergman Clinics Maag & Darm Amsterdam, Amsterdam, The Netherlands; Department of Gastroenterology and Hepatology, Amstelland Hospital, Amstelveen, the Netherlands
- R. Schröder: Department of Gastroenterology and Hepatology, Nij Smellinghe Hospital, Drachten, The Netherlands
- E.J van Soest: Department of Gastroenterology and Hepatology, Spaarne Ziekenhuis, Hoofddorp, the Netherlands
- J. Tenthof van Noorden: Department of Gastroenterology and Hepatology, Sint Antonius Ziekenhuis, Nieuwegein, the Netherlands
- K.M.A.J Tytgat: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; Bergman Clinics Maag & Darm Amsterdam, Amsterdam, The Netherlands
- P. van Oosterwijk: Department of Gastroenterology and Hepatology, Deventer Hospital, Deventer, The Netherlands
- P. van Putten: Department of Gastroenterology and Hepatology, Medical Center Leeuwarden, Leeuwarden, The Netherlands
- A. Vehmeijer: Department of Gastroenterology and Hepatology, Spaarne Ziekenhuis, Hoofddorp, the Netherlands
- R. de Vries: Department of Gastroenterology and Hepatology, Deventer Hospital, Deventer, The Netherlands
- M. van der Vlugt: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; Bergman Clinics Maag & Darm Amsterdam, Amsterdam, The Netherlands
- F. Voogd: Department of Gastroenterology and Hepatology, Medical Center Leeuwarden, Leeuwarden, The Netherlands
- E. van der Zanden: Department of Gastroenterology and Hepatology, Amstelland Hospital, Amstelveen, the Netherlands
9. Narasimha Raju AS, Jayavel K, Rajalakshmi T. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence. Comput Math Methods Med 2022;2022:8723957. PMID: 36404909. PMCID: PMC9671728. DOI: 10.1155/2022/8723957.
Abstract
Colorectal cancer typically affects the gastrointestinal tract. Colonoscopy is one of the most accurate methods of detecting it, and computer-aided diagnosis (CADx) systems assist this identification; current systems, however, use only a limited number of deep learning methods and do not draw on mixed datasets. The proposed system, ColoRectalCADx, is supported by deep learning (DL) models suited to cancer research. The CADx system comprises five stages: convolutional neural networks (CNN), support vector machine (SVM), long short-term memory (LSTM), visual explanation via gradient-weighted class activation mapping (Grad-CAM), and semantic segmentation. Its key components are 9 individual and 12 integrated CNNs, so the investigational experiments cover 21 CNNs in total. In the subsequent phase, the CADx combines the CNNs' concatenated transfer-learning features with an SVM classifier; additional classification ensures effective transfer of results from the CNNs to the LSTM. The system takes as input a mixed dataset combining CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the CNN and LSTM stages, malignancies are detected with an improved polyp recognition technique using Grad-CAM and semantic segmentation with U-Net. CADx results are stored on Google Cloud for record retention. In these experiments, among all the CNNs, the individual CNN DenseNet-201 (87.1% training and 84.7% testing accuracies) and the integrated CNN ADaDR-22 (84.61% training and 82.17% testing accuracies) were the most efficient for cancer detection with the CNN+LSTM model. ColoRectalCADx thus accurately identifies cancer through the individual CNN DenseNet-201 and the integrated CNN ADaDR-22.
In Grad-CAM's visual explanations, the CNN DenseNet-201 yields precise visualization of polyps, and U-Net provides precise segmentation of malignant polyps.
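The Grad-CAM step described above weights each channel's activation map by the spatial mean of its gradient and applies a ReLU to keep only class-positive evidence. A minimal, framework-free sketch of that computation follows; the toy activation and gradient maps are illustrative assumptions, not the paper's data:

```python
def grad_cam(activations, gradients):
    """Toy Grad-CAM: `activations` and `gradients` are K channel maps,
    each an H x W list of lists, taken from the last conv layer."""
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient for channel k
    alphas = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    # alpha-weighted sum over channels, then ReLU
    cam = [[max(0.0, sum(alphas[k] * activations[k][i][j] for k in range(K)))
            for j in range(W)] for i in range(H)]
    # normalize the heatmap to [0, 1] for display
    peak = max(max(row) for row in cam)
    if peak > 0:
        cam = [[v / peak for v in row] for row in cam]
    return cam

# Two 2x2 channel maps with uniform gradients (alpha_0 = 1, alpha_1 = -1)
acts = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 2.0], [0.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],
         [[-1.0, -1.0], [-1.0, -1.0]]]
heatmap = grad_cam(acts, grads)
```

In a real pipeline the activations and gradients would come from the trained CNN (e.g. DenseNet-201) rather than hand-written lists, and the heatmap would be upsampled to the input image size before overlaying on the endoscopic frame.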
Collapse
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
| | - Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
| | - T. Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
| |
Collapse
|
10
|
Lu Y, Wu J, Zhuo X, Hu M, Chen Y, Luo Y, Feng Y, Zhi M, Li C, Sun J. Real-Time Artificial Intelligence-Based Histologic Classifications of Colorectal Polyps Using Narrow-Band Imaging. Front Oncol 2022; 12:879239. [PMID: 35619917 PMCID: PMC9128404 DOI: 10.3389/fonc.2022.879239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2022] [Accepted: 03/28/2022] [Indexed: 11/28/2022] Open
Abstract
Background and Aims: With the development of artificial intelligence (AI), we have become capable of applying real-time computer-aided detection (CAD) in clinical practice. Our aim is to develop an AI-based CAD-N and optimize its diagnostic performance with narrow-band imaging (NBI) images.
Methods: We developed the CAD-N model with ResNeSt using NBI images for real-time assessment of the histopathology of colorectal polyps (type 1, hyperplastic or inflammatory polyps; type 2, adenomatous polyps, intramucosal cancer, or superficial submucosal invasive cancer; type 3, deep submucosal invasive cancer; and type 4, normal mucosa). We also collected 116 consecutive polyp videos to validate the accuracy of the CAD-N.
Results: A total of 10,573 images (7,032 images from 650 polyps and 3,541 normal mucous membrane images) from 478 patients were finally chosen for analysis. The sensitivity, specificity, PPV, NPV, and accuracy for each type of the CAD-N in the test set were 89.86%, 97.88%, 93.13%, 96.79%, and 95.93% for type 1; 93.91%, 95.49%, 91.80%, 96.69%, and 94.94% for type 2; 90.21%, 99.29%, 90.21%, 99.29%, and 98.68% for type 3; and 94.86%, 97.28%, 94.73%, 97.35%, and 96.45% for type 4, respectively. The overall accuracy was 93%. We also built models for polyps ≤5 mm, and the sensitivity, specificity, PPV, NPV, and accuracy for them were 96.81%, 94.08%, 95%, 95.97%, and 95.59%, respectively. Video validation results showed that the sensitivity, specificity, and accuracy of the CAD-N were 84.62%, 86.27%, and 85.34%, respectively.
Conclusions: We have developed real-time AI-based histologic classifications of colorectal polyps using NBI images with good accuracy, which may help in clinical management and documentation of optical histology results.
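The per-type sensitivity, specificity, PPV, NPV, and accuracy figures reported above are standard one-vs-rest confusion-matrix metrics. A minimal sketch of how each is computed; the counts below are hypothetical, not the study's data:

```python
def per_class_metrics(tp, fp, fn, tn):
    """One-vs-rest metrics for a single class from confusion-matrix counts:
    tp/fp/fn/tn = true/false positives and negatives for that class."""
    sensitivity = tp / (tp + fn)            # recall: positives correctly flagged
    specificity = tn / (tn + fp)            # negatives correctly cleared
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    npv = tn / (tn + fn)                    # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical counts for one polyp type in a 1000-image test set
sens, spec, ppv, npv, acc = per_class_metrics(tp=90, fp=10, fn=10, tn=890)
```

For a four-type classifier like the CAD-N, these five values are computed once per type by treating that type as "positive" and the other three as "negative", which is why the abstract reports a separate quintuple for each type alongside a single overall accuracy.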
Collapse
Affiliation(s)
- Yi Lu
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jiachuan Wu
- Digestive Endoscopy Center, Guangdong Second Provincial General Hospital, Guangzhou, China
| | - Xianhua Zhuo
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Otorhinolaryngology, the Second Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Minhui Hu
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yongpeng Chen
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yuxuan Luo
- Tianjin Economic-Technological Development Area (TEDA) Yujin Digestive Health Industry Research Institute, Tianjin, China
| | - Yue Feng
- Tianjin Economic-Technological Development Area (TEDA) Yujin Digestive Health Industry Research Institute, Tianjin, China
| | - Min Zhi
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Gastroenterology, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Chujun Li
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jiachen Sun
- Department of Gastrointestinal Endoscopy, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|