1
Braverman-Jaiven D, Manfredi L. Advancements in the use of AI in the diagnosis and management of inflammatory bowel disease. Front Robot AI 2024; 11:1453194. [PMID: 39498116; PMCID: PMC11532194; DOI: 10.3389/frobt.2024.1453194]
Abstract
Inflammatory bowel disease (IBD) causes chronic inflammation of the colon and digestive tract and is classified into Crohn's disease (CD) and ulcerative colitis (UC). IBD is most prevalent in Europe and North America; however, since the beginning of the 21st century its incidence has been rising in South America, Asia, and Africa, making it a worldwide problem. Optical colonoscopy is one of the crucial tests for diagnosing IBD and assessing its progression and prognosis, as it allows real-time optical visualization of the colonic wall and ileum and the collection of tissue samples. The accuracy of colonoscopy depends on the expertise and ability of the endoscopist. Therefore, algorithms based on Deep Learning (DL) and Convolutional Neural Networks (CNNs) for colonoscopy images and videos are growing in popularity, especially for the detection and classification of colorectal polyps. The performance of such systems depends on the quality and quantity of the data used for training. Several datasets of endoscopy images and videos are publicly available, but most of them are specialized solely in polyps. The use of DL algorithms to detect IBD is still in its infancy; most studies focus on assessing the severity of UC. As artificial intelligence (AI) grows in popularity, there is increasing interest in using these algorithms to diagnose and classify IBD and to manage its progression. To achieve this, more annotated colonoscopy images and videos will be required to train new and more reliable AI algorithms. This article discusses the current challenges in the early detection of IBD, focusing on the available AI algorithms and databases, and the challenges ahead to improve the detection rate.
Affiliation(s)
- Luigi Manfredi
- Division of Imaging Science and Technology, School of Medicine, University of Dundee, Dundee, United Kingdom
2
Tudela Y, Majó M, de la Fuente N, Galdran A, Krenzer A, Puppe F, Yamlahi A, Tran TN, Matuszewski BJ, Fitzgerald K, Bian C, Pan J, Liu S, Fernández-Esparrach G, Histace A, Bernal J. A complete benchmark for polyp detection, segmentation and classification in colonoscopy images. Front Oncol 2024; 14:1417862. [PMID: 39381041; PMCID: PMC11458519; DOI: 10.3389/fonc.2024.1417862]
Abstract
Introduction: Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing mortality and improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation, and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in clinical practice. Methods: This study presents a complete validation framework and compares several methodologies for each of the polyp characterization tasks. Results: The results show that the majority of the approaches provide good performance for the detection and segmentation tasks, but that there is room for improvement in polyp classification. Discussion: While the studies show promising results for assisting polyp detection and segmentation, further research is needed on the classification task to obtain results reliable enough to assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically ready assisting methods.
Affiliation(s)
- Yael Tudela
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Mireia Majó
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Neil de la Fuente
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Adrian Galdran
- Department of Information and Communication Technologies, SymBioSys Research Group, BCNMedTech, Barcelona, Spain
- Adrian Krenzer
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Frank Puppe
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Amine Yamlahi
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy Nuong Tran
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bogdan J. Matuszewski
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Kerr Fitzgerald
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Cheng Bian
- Hebei University of Technology, Baoding, China
- Shijie Liu
- Hebei University of Technology, Baoding, China
- Aymeric Histace
- ETIS UMR 8051, École Nationale Supérieure de l'Électronique et de ses Applications (ENSEA), Centre National de la Recherche Scientifique (CNRS), CY Cergy Paris University, Cergy, France
- Jorge Bernal
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
3
Wan JJ, Zhu PC, Chen BL, Yu YT. A semantic feature enhanced YOLOv5-based network for polyp detection from colonoscopy images. Sci Rep 2024; 14:15478. [PMID: 38969765; PMCID: PMC11226707; DOI: 10.1038/s41598-024-66642-5]
Abstract
Colorectal cancer (CRC) is a common digestive system tumor with high morbidity and mortality worldwide. The use of computer-assisted colonoscopy technology to detect polyps is relatively mature, but it still faces challenges such as missed or false detections; improving the accuracy of polyp detection is therefore key to colonoscopy. To address this problem, this paper proposes an improved YOLOv5-based method for detecting colorectal cancer polyps. The method incorporates a new structure, called P-C3, into the backbone and neck networks of the model to enhance feature expression. In addition, a contextual feature augmentation module is introduced at the bottom of the backbone network to increase the receptive field for multi-scale feature information and to focus on polyp features through a coordinate attention mechanism. The experimental results show that, compared with some traditional target detection algorithms, the proposed model has significant advantages in polyp detection accuracy, especially in recall, which largely addresses the problem of missed polyps. This study should help improve endoscopists' polyp/adenoma detection rate during colonoscopy and is also of significance for clinical work.
Affiliation(s)
- Jing-Jing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huai'an, 223023, Jiangsu, China
- Peng-Cheng Zhu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huai'an, 223003, China
- Bo-Lun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huai'an, 223003, China
- Yong-Tao Yu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huai'an, 223003, China
4
Sahafi A, Koulaouzidis A, Lalinia M. Polypoid Lesion Segmentation Using YOLO-V8 Network in Wireless Video Capsule Endoscopy Images. Diagnostics (Basel) 2024; 14:474. [PMID: 38472946; DOI: 10.3390/diagnostics14050474]
Abstract
Gastrointestinal (GI) tract disorders are a significant public health issue. They are becoming more common and can cause serious health problems and high healthcare costs. Small bowel tumours (SBTs) and colorectal cancer (CRC) are both becoming more prevalent, especially among younger adults. Early detection and removal of polyps (precursors of malignancy) are essential for prevention. Wireless Capsule Endoscopy (WCE) is a procedure that utilises swallowable camera devices to capture images of the GI tract. Because WCE generates a large number of images, automated polyp segmentation is crucial. This paper reviews computer-aided approaches to polyp detection using WCE imagery and evaluates them on a dataset of labelled anomalies and findings. The study focuses on YOLO-V8, an improved deep learning model, for polyp segmentation and finds that it performs better than existing methods, achieving high precision and recall. The present study underscores the potential of automated detection systems in improving GI polyp identification.
Affiliation(s)
- Ali Sahafi
- Department of Mechanical and Electrical Engineering, Digital and High-Frequency Electronics Section, University of Southern Denmark, 5230 Odense, Denmark
- Anastasios Koulaouzidis
- Surgical Research Unit, Odense University Hospital, 5000 Svendborg, Denmark
- Department of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark
- Department of Medicine, OUH Svendborg Sygehus, 5700 Svendborg, Denmark
- Department of Social Medicine and Public Health, Pomeranian Medical University, 70204 Szczecin, Poland
- Mehrshad Lalinia
- Department of Mechanical and Electrical Engineering, Digital and High-Frequency Electronics Section, University of Southern Denmark, 5230 Odense, Denmark
5
Zhu S, Gao J, Liu L, Yin M, Lin J, Xu C, Xu C, Zhu J. Public Imaging Datasets of Gastrointestinal Endoscopy for Artificial Intelligence: a Review. J Digit Imaging 2023; 36:2578-2601. [PMID: 37735308; PMCID: PMC10584770; DOI: 10.1007/s10278-023-00844-7]
Abstract
With the advances in endoscopic technologies and artificial intelligence, a large number of endoscopic imaging datasets have been made public to researchers around the world. This study aims to review and introduce these datasets. An extensive literature search was conducted to identify appropriate datasets in PubMed, and other targeted searches were conducted in GitHub, Kaggle, and Simula to identify datasets directly. We provided a brief introduction to each dataset and evaluated the characteristics of the datasets included. Moreover, two national datasets in progress were discussed. A total of 40 datasets of endoscopic images were included, of which 34 were accessible for use. Basic and detailed information on each dataset was reported. Of all the datasets, 16 focus on polyps, and 6 focus on small bowel lesions. Most datasets (n = 16) were constructed by colonoscopy only, followed by normal gastrointestinal endoscopy and capsule endoscopy (n = 9). This review may facilitate the usage of public dataset resources in endoscopic research.
Affiliation(s)
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jingwen Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Lu Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Minyue Yin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
6
Bian H, Jiang M, Qian J. The investigation of constraints in implementing robust AI colorectal polyp detection for sustainable healthcare system. PLoS One 2023; 18:e0288376. [PMID: 37437026; DOI: 10.1371/journal.pone.0288376]
Abstract
Colorectal cancer (CRC) is one of the significant threats to public health and to sustainable healthcare systems during urbanization. As the primary screening method, colonoscopy can effectively detect polyps before they evolve into cancerous growths. However, visual inspection by endoscopists alone does not provide consistently reliable polyp detection in colonoscopy videos and images for CRC screening. Artificial Intelligence (AI)-based object detection is considered a potent solution for overcoming the limitations of visual inspection and mitigating human error in colonoscopy. This study implemented a YOLOv5 object detection model to investigate the performance of mainstream one-stage approaches in colorectal polyp detection. A variety of training datasets and model structure configurations were employed to identify the determinative factors for practical applications. The experiments show that the model yields acceptable results when assisted by transfer learning, and highlight that the primary constraint on implementing deep learning polyp detection is the scarcity of training data. Model performance improved by 15.6% in average precision (AP) when the original training dataset was expanded. Furthermore, the experimental results were analysed from a clinical perspective to identify potential causes of false positives. Finally, a quality management framework is proposed for future dataset preparation and model development in AI-driven polyp detection for smart healthcare solutions.
Affiliation(s)
- Haitao Bian
- College of Safety Science and Engineering, Nanjing Tech University, Nanjing, Jiangsu, China
- Min Jiang
- KLA Corporation, Milpitas, California, United States of America
- Jingjing Qian
- Department of Gastroenterology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, China
7
Shen MH, Huang CC, Chen YT, Tsai YJ, Liou FM, Chang SC, Phan NN. Deep Learning Empowers Endoscopic Detection and Polyps Classification: A Multiple-Hospital Study. Diagnostics (Basel) 2023; 13:1473. [PMID: 37189575; DOI: 10.3390/diagnostics13081473]
Abstract
The present study aimed to develop an AI-based system for the detection and classification of polyps using colonoscopy images. A total of 256,220 colonoscopy images from 5,000 colorectal cancer patients were collected and processed. We used a CNN model for polyp detection and the EfficientNet-b0 model for polyp classification. Data were partitioned into training, validation, and testing sets at a 70%, 15%, and 15% ratio, respectively. After the model was trained, validated, and tested, we conducted a rigorous external validation using both prospective (n = 150) and retrospective (n = 385) data collection from 3 hospitals. On the testing set, the deep learning model reached a state-of-the-art sensitivity and specificity of 0.9709 (95% CI: 0.9646-0.9757) and 0.9701 (95% CI: 0.9663-0.9749), respectively, for polyp detection. The polyp classification model attained an AUC of 0.9989 (95% CI: 0.9954-1.00). External validation across the 3 hospitals achieved a lesion-based sensitivity of 0.9516 (95% CI: 0.9295-0.9670) and a frame-based specificity of 0.9720 (95% CI: 0.9713-0.9726) for polyp detection, and an AUC of 0.9521 (95% CI: 0.9308-0.9734) for polyp classification. This high-performance, deep-learning-based system could be used in clinical practice to facilitate rapid, efficient, and reliable decisions by physicians and endoscopists.
Affiliation(s)
- Ming-Hung Shen
- Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- Chi-Cheng Huang
- Department of Surgery, Taipei Veterans General Hospital, Taipei City 11217, Taiwan
- Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei City 10663, Taiwan
- Yu-Tsung Chen
- Department of Internal Medicine, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
- Yi-Jian Tsai
- Division of Colorectal Surgery, Department of Surgery, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
- Graduate Institute of Biomedical Electronics and Bioinformatics, Department of Electrical Engineering, National Taiwan University, Taipei City 10663, Taiwan
- Shih-Chang Chang
- Division of Colorectal Surgery, Department of Surgery, Cathay General Hospital, Taipei City 106443, Taiwan
- Nam Nhut Phan
- Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei City 10055, Taiwan
8
Shahid B, Abbas M, Ur Rehman A, Ul Abideen Z. IAPC2: Improved and Automatic Classification of Polyp for Colorectal Cancer. 2023 International Conference on Business Analytics for Technology and Security (ICBATS) 2023. [DOI: 10.1109/icbats57792.2023.10111431]
Affiliation(s)
- Bisma Shahid
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Maria Abbas
- Riphah International University, Department of Computer Science, Lahore, Pakistan
- Abd Ur Rehman
- Riphah International University, Department of Computer Science, Lahore, Pakistan
9
Schulz D, Heilmaier M, Phillip V, Treiber M, Mayr U, Lahmer T, Mueller J, Demir IE, Friess H, Reichert M, Schmid RM, Abdelhafez M. Accurate prediction of histological grading of intraductal papillary mucinous neoplasia using deep learning. Endoscopy 2023; 55:415-422. [PMID: 36323331; DOI: 10.1055/a-1971-1274]
Abstract
BACKGROUND: Risk stratification and recommendations for surgery for intraductal papillary mucinous neoplasm (IPMN) are currently based on consensus guidelines. Risk stratification from presurgical histology is only potentially decisive owing to the low sensitivity of fine-needle aspiration. In this study, we developed and validated a deep learning-based method to distinguish IPMN with low grade dysplasia from IPMN with high grade dysplasia/invasive carcinoma using endoscopic ultrasound (EUS) images. METHODS: For model training, we acquired a total of 3,355 EUS images from 43 patients who underwent pancreatectomy from March 2015 to August 2021. All patients had histologically proven IPMN. We used transfer learning to fine-tune a convolutional neural network to classify "low grade IPMN" versus "high grade IPMN/invasive carcinoma." Our test set consisted of 1,823 images from 27 patients: 11 recruited retrospectively, 7 prospectively, and 9 externally. We compared our results with predictions based on international consensus guidelines. RESULTS: Our approach classified low grade from high grade/invasive carcinoma in the test set with an accuracy of 99.6% (95% CI 99.5%-99.9%). The deep learning model achieved superior accuracy in predicting the histological outcome compared with any individual guideline, whose accuracies ranged from 51.8% (95% CI 31.9%-71.3%) to 70.4% (95% CI 49.8%-86.2%). CONCLUSION: This pilot study demonstrated that deep learning on IPMN EUS images can predict the histological outcome with high accuracy.
Affiliation(s)
- Dominik Schulz
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Markus Heilmaier
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Veit Phillip
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matthias Treiber
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Ulrich Mayr
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Tobias Lahmer
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Julius Mueller
- Klinik für Innere Medizin II, Universitätsklinikum Freiburg, Freiburg, Germany
- Ihsan Ekin Demir
- Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Helmut Friess
- Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Maximilian Reichert
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Roland M Schmid
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Mohamed Abdelhafez
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
10
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720; DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS: Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS: A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data on polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS: We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases.
CONCLUSIONS: This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
11
Abbas A, Gaber MM, Abdelsamea MM. XDecompo: Explainable Decomposition Approach in Convolutional Neural Networks for Tumour Image Classification. Sensors (Basel) 2022; 22:9875. [PMID: 36560243; PMCID: PMC9782528; DOI: 10.3390/s22249875]
Abstract
Of the various tumour types, colorectal cancer and brain tumours are still considered among the most serious and deadly diseases in the world. Therefore, many researchers are interested in improving the accuracy and reliability of diagnostic medical machine learning models. In computer-aided diagnosis, self-supervised learning has proven to be an effective solution when dealing with datasets that have insufficient data annotations. However, medical image datasets often suffer from data irregularities, making the recognition task even more challenging. The class decomposition approach provides a robust solution to this problem by simplifying the learning of the class boundaries of a dataset. In this paper, we propose a robust self-supervised model, called XDecompo, to improve the transferability of features from the pretext task to the downstream task. XDecompo is designed around affinity propagation-based class decomposition to effectively encourage learning of the class boundaries in the downstream task. It includes an explainable component that highlights the pixels contributing to classification and explains the effect of class decomposition on improving the specificity of the extracted features. We also explore the generalisability of XDecompo on different medical datasets, such as histopathology images for colorectal cancer and brain tumour images. The quantitative results demonstrate the robustness of XDecompo, with high accuracies of 96.16% and 94.30% for CRC and brain tumour images, respectively. XDecompo demonstrated its generalisation capability and achieved high classification accuracy (both quantitatively and qualitatively) on different medical image datasets compared with other models. Moreover, a post hoc explainable method has been used to validate the feature transferability, demonstrating highly accurate feature representations.
Affiliation(s)
- Asmaa Abbas
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
| | - Mohamed Medhat Gaber
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
| | - Mohammed M. Abdelsamea
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Department of Computer Science, Faculty of Computers and Information, University of Assiut, Assiut 71515, Egypt
12
Basso MN, Barua M, Meyer J, John R, Khademi A. Machine learning in renal pathology. FRONTIERS IN NEPHROLOGY 2022; 2:1007002. [PMID: 37675000 PMCID: PMC10479587 DOI: 10.3389/fneph.2022.1007002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 11/09/2022] [Indexed: 09/08/2023]
Abstract
Introduction When assessing kidney biopsies, pathologists use light microscopy, immunofluorescence, and electron microscopy to describe and diagnose glomerular lesions and diseases. These methods can be laborious, costly, fraught with inter-observer variability, and can have delays in turn-around time. Thus, computational approaches can be designed as screening and/or diagnostic tools, potentially saving pathologist time and healthcare resources, while also being able to identify novel biomarkers, including subvisual features. Methods Here, we implement our recently published biomarker feature extraction (BFE) model along with 3 pre-trained deep learning models (VGG16, VGG19, and InceptionV3) to diagnose 3 glomerular diseases using PAS-stained digital pathology images alone. The BFE model extracts a panel of 233 explainable features related to underlying pathology, which are subsequently narrowed down to 10 morphological and microstructural texture features for classification with a linear discriminant analysis machine learning classifier. 45 patient renal biopsies (371 glomeruli) from minimal change disease (MCD), membranous nephropathy (MN), and thin-basement membrane nephropathy (TBMN) were split into training/validation and held-out sets. For the 3 deep learning models, data augmentation and Grad-CAM were used for better performance and interpretability. Results The BFE model showed glomerular validation accuracy of 67.6% and testing accuracy of 76.8%. All deep learning approaches had higher validation accuracies (highest for VGG16 at 78.5%) but lower testing accuracies. The highest testing accuracy at the glomerular level was VGG16 at 71.9%, while at the patient level it was InceptionV3 at 73.3%. Discussion The results highlight the potential of both traditional machine learning and deep learning-based approaches for kidney biopsy evaluation.
Affiliation(s)
- Matthew Nicholas Basso
- Image Analysis in Medicine Lab (IAMLAB), Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Moumita Barua
- Division of Nephrology, University Health Network, Toronto, ON, Canada
- Toronto General Hospital Research Institute, Toronto General Hospital, Toronto, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Julien Meyer
- School of Health Services Management, Ryerson University, Toronto, ON, Canada
- Rohan John
- Department of Pathology, University Health Network, Toronto, ON, Canada
- April Khademi
- Image Analysis in Medicine Lab (IAMLAB), Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Keenan Research Center for Biomedical Science, St. Michael’s Hospital, Unity Health Network, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science, and Technology (iBEST), a partnership between St. Michael’s Hospital and Ryerson University, Toronto, ON, Canada
13
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. SENSORS (BASEL, SWITZERLAND) 2022; 22:9250. [PMID: 36501951 PMCID: PMC9739266 DOI: 10.3390/s22239250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/24/2022] [Indexed: 06/17/2023]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, almost half a million people around the world are diagnosed with colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various screening tests that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades, as well as the available imaging modalities (particularly, the histopathology images used for analysis), in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer, which leads to early treatment of the disease and a lower mortality rate compared with the rate observed once symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests that make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
14
Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592 PMCID: PMC9672666 DOI: 10.3389/fmed.2022.1018937] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/03/2022] [Indexed: 11/06/2022] Open
Abstract
Background Artificial Intelligence (AI) holds considerable promise for diagnostics in the field of gastroenterology. This systematic review and meta-analysis aims to assess the diagnostic accuracy of AI models compared with the gold standard of experts and histopathology for the diagnosis of various gastrointestinal (GI) luminal pathologies, including polyps, neoplasms, and inflammatory bowel disease. Methods We searched the PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and constructed hierarchical summary receiver operating characteristic (HSROC) curves. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted based on the type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0–94.1) and specificity was 91.7% (95% CI: 87.4–94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) of studies were low risk on reference standard and 67% (n = 49) were low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies.
There is a need for large, multi-center trials in both high income countries and low- and middle- income countries to assess the performance of these AI models in real clinical settings and its impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
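Each pooled estimate above is derived from per-study 2×2 contingency tables. As a minimal sketch of that derivation (with illustrative counts, not figures from any included study), sensitivity and specificity fall out of the table like this:

```python
# Sensitivity and specificity from a binary diagnostic 2x2 contingency
# table, as extracted per study in the meta-analysis. The counts below
# are illustrative only, not taken from any included study.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from contingency counts."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=92, fp=8, fn=9, tn=91)
print(round(sens, 3), round(spec, 3))  # 0.911 0.919
```

The HSROC model then pools these per-study pairs while accounting for between-study variation in threshold and accuracy.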
Affiliation(s)
- Om Parkash
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Uswa Jiwani
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Fahad Rind
- Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
- Zahra Ali Padhani
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Arjumand Rizvi
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jai K. Das
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Correspondence: Jai K. Das
15
Wang W, Huang W, Wang X, Zhang P, Zhang N. A COVID-19 CXR image recognition method based on MSA-DDCovidNet. IET IMAGE PROCESSING 2022; 16:2101-2113. [PMID: 35601273 PMCID: PMC9111165 DOI: 10.1049/ipr2.12474] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 11/06/2021] [Accepted: 02/28/2022] [Indexed: 06/15/2023]
Abstract
Currently, coronavirus disease 2019 (COVID-19) has not been contained. Detecting infected persons from chest X-ray (CXR) images using deep learning methods is a safe and effective approach. To this end, the dual-path multi-scale fusion (DMFF) module and dense dilated depth-wise separable (D3S) module are used to extract shallow and deep features, respectively. Based on these two modules and a multi-scale spatial attention (MSA) mechanism, a lightweight convolutional neural network model, MSA-DDCovidNet, is designed. Experimental results show that the accuracy of the MSA-DDCovidNet model on COVID-19 CXR images is as high as 97.962%. In addition, the proposed MSA-DDCovidNet has lower computational complexity and fewer parameters. Compared with other methods, MSA-DDCovidNet can help diagnose COVID-19 more quickly and accurately.
Affiliation(s)
- Wei Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
- Wendi Huang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
- Xin Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
- Peng Zhang
- School of Electronics and Communications Engineering, Sun Yat-sen University, Shenzhen, China
- Nian Zhang
- Department of Electrical and Computer Engineering, University of the District of Columbia, Washington, DC, USA
16
Nogueira-Rodríguez A, Reboiro-Jato M, Glez-Peña D, López-Fernández H. Performance of Convolutional Neural Networks for Polyp Localization on Public Colonoscopy Image Datasets. Diagnostics (Basel) 2022; 12:898. [PMID: 35453946 PMCID: PMC9027927 DOI: 10.3390/diagnostics12040898] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 03/31/2022] [Accepted: 04/01/2022] [Indexed: 01/10/2023] Open
Abstract
Colorectal cancer is one of the most frequent malignancies. Colonoscopy is the de facto standard for detecting precancerous lesions, i.e., polyps, in the colon during screening studies or after facultative recommendation. In recent years, artificial intelligence, and especially deep learning techniques such as convolutional neural networks, have been applied to polyp detection and localization in order to develop real-time CADe systems. However, the performance of machine learning models is very sensitive to changes in the nature of the testing instances, especially when trying to reproduce results for datasets totally different from those used for model development, i.e., inter-dataset testing. Here, we report the results of testing our previously published polyp detection model using ten public colonoscopy image datasets and analyze them in the context of the results of 20 other state-of-the-art publications using the same datasets. The F1-score of our recently published model was 0.88 when evaluated on a private test partition, i.e., intra-dataset testing, but it decayed, on average, by 13.65% when tested on the ten public datasets. In the published research, the average intra-dataset F1-score is 0.91, and we observed that it also decays in the inter-dataset setting, to an average F1-score of 0.83.
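The decay figures quoted above combine two simple formulas: F1 as the harmonic mean of precision and recall, and the relative drop between intra- and inter-dataset scores. A small sketch using the abstract's headline numbers (0.88 intra-dataset F1, 13.65% average decay):

```python
# F1 score and relative inter-dataset decay, using the headline numbers
# quoted in the abstract (0.88 intra-dataset F1, 13.65% average decay).

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def relative_decay(intra_f1, inter_f1):
    """Percent drop from intra- to inter-dataset testing."""
    return 100 * (intra_f1 - inter_f1) / intra_f1

intra = 0.88
inter = intra * (1 - 0.1365)  # implied average inter-dataset F1
print(round(inter, 3))                          # 0.76
print(round(relative_decay(intra, inter), 2))   # 13.65
```

The same arithmetic applied to the published averages (0.91 intra, 0.83 inter) gives a decay of roughly 8.8%, smaller than the authors' own model but still substantial.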
Affiliation(s)
- Alba Nogueira-Rodríguez
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Miguel Reboiro-Jato
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Daniel Glez-Peña
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Hugo López-Fernández
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
17
Nisha JS, Gopi VP, Palanisamy P. CLASSIFICATION OF INFORMATIVE FRAMES IN COLONOSCOPY VIDEO BASED ON IMAGE ENHANCEMENT AND PHOG FEATURE EXTRACTION. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2022. [DOI: 10.4015/s1016237222500156] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
Colonoscopy allows doctors to check the abnormalities in the intestinal tract without any surgical operations. The major problem in the Computer-Aided Diagnosis (CAD) of colonoscopy images is the low illumination condition of the images. This study aims to provide an image enhancement method and feature extraction and classification techniques for detecting polyps in colonoscopy images. We propose a novel image enhancement method with a Pyramid Histogram of Oriented Gradients (PHOG) feature extractor to detect polyps in the colonoscopy images. The approach is evaluated across different classifiers, such as Multi-Layer Perceptron (MLP), Adaboost, Support Vector Machine (SVM), and Random Forest. The proposed method has been trained using the publicly available databases CVC ClinicDB and tested in ETIS Larib and CVC ColonDB. The proposed approach outperformed the existing state-of-the-art methods on both databases. The reliability of the classifiers’ performance was examined by comparing their F1 score, precision, F2 score, recall, and accuracy. PHOG with Random Forest classifier outperformed the existing methods in terms of recall of 97.95%, precision 98.46%, F1 score 98.20%, F2 score of 98.00%, and accuracy of 98.21% in the CVC-ColonDB. In the ETIS-LARIB dataset it attained a recall value of 96.83%, precision 98.65%, F1 score 97.73%, F2 score 98.59%, and accuracy of 97.75%. We observed that the proposed image enhancement method with PHOG feature extraction and the Random Forest classifier will help doctors to evaluate and analyze anomalies from colonoscopy data and make decisions quickly.
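For readers unfamiliar with the descriptor, a PHOG feature vector concatenates orientation histograms computed over an increasingly fine spatial pyramid. The following is a minimal pure-Python sketch of that idea; the bin count, pyramid depth, and use of raw intensity gradients (rather than an edge map) are simplifying assumptions, not the authors' settings:

```python
import math

# Minimal Pyramid Histogram of Oriented Gradients (PHOG) sketch on a
# tiny grayscale image stored as nested lists. Illustrative only: real
# implementations typically work on Canny edge maps with tuned binning.

def gradient_histogram(img, y0, y1, x0, x1, bins=8):
    """Magnitude-weighted histogram of gradient orientations in a window."""
    hist = [0.0] * bins
    for y in range(max(y0, 1), min(y1, len(img) - 1)):
        for x in range(max(x0, 1), min(x1, len(img[0]) - 1)):
            gy = img[y + 1][x] - img[y - 1][x]
            gx = img[y][x + 1] - img[y][x - 1]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist

def phog(img, levels=2, bins=8):
    """Concatenate orientation histograms over a spatial pyramid."""
    h, w = len(img), len(img[0])
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level  # grid is cells x cells at this level
        for i in range(cells):
            for j in range(cells):
                feats += gradient_histogram(
                    img, i * h // cells, (i + 1) * h // cells,
                    j * w // cells, (j + 1) * w // cells, bins)
    total = sum(feats) or 1.0
    return [f / total for f in feats]  # L1-normalised descriptor

img = [[(x * y) % 7 for x in range(16)] for y in range(16)]
desc = phog(img)
print(len(desc))  # (1 + 4 + 16) * 8 = 168 features
```

The resulting fixed-length vector is what a downstream classifier such as the Random Forest above consumes.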
Affiliation(s)
- J. S. Nisha
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- P. Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
18
Avci SN, Isiktas G, Berber E. A Visual Deep Learning Model to Localize Parathyroid-Specific Autofluorescence on Near-Infrared Imaging : Localization of Parathyroid Autofluorescence with Deep Learning. Ann Surg Oncol 2022; 29:10.1245/s10434-022-11632-y. [PMID: 35348975 DOI: 10.1245/s10434-022-11632-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 01/26/2022] [Indexed: 02/21/2024]
Abstract
BACKGROUND AND PURPOSE Parathyroid glands may be detected by their autofluorescence on near-infrared imaging. Nevertheless, recognition of parathyroid-specific autofluorescence requires a learning curve, with other unrelated bright signals causing confusion. The aim of this study was to find out whether machine learning could be used to facilitate identification of parathyroid-specific autofluorescence signals on intraoperative near-infrared images in patients undergoing thyroidectomy and parathyroidectomy procedures. METHODS In an institutional review board-approved study, intraoperative near-infrared images of patients who underwent thyroidectomy and/or parathyroidectomy procedures within a year were used to develop an artificial intelligence model. Parathyroid-specific autofluorescence signals were marked with rectangles on intraoperative near-infrared still images and used for training a deep learning model. A randomly chosen 80% of the data were used for training, 10% for testing, and 10% for validation. Precision and recall of the model were calculated. RESULTS A total of 466 intraoperative near-infrared images of 197 patients who underwent thyroidectomy and/or parathyroidectomy procedures were analyzed. Procedures included total thyroidectomy in 54 patients, thyroid lobectomy in 24 patients, parathyroidectomy in 108 patients, and combined thyroidectomy and parathyroidectomy procedures in 11 patients. The overall recall and precision of the model were 90.5 and 95.7%, respectively. CONCLUSIONS To our knowledge, this is the first study that describes the use of artificial intelligence tools to assist in recognition of parathyroid-specific autofluorescence signals on near-infrared imaging. The model developed may have utility in facilitating training and decreasing the learning curve associated with the use of this technology.
Affiliation(s)
- Seyma Nazli Avci
- Department of Endocrine Surgery, Cleveland Clinic, Cleveland, Ohio, USA
- Gizem Isiktas
- Department of Endocrine Surgery, Cleveland Clinic, Cleveland, Ohio, USA
- Eren Berber
- Department of Endocrine Surgery, Cleveland Clinic, Cleveland, Ohio, USA.
- Department of General Surgery, Cleveland Clinic, Cleveland, Ohio, USA.
19
Nisha J, P. Gopi V, Palanisamy P. Automated colorectal polyp detection based on image enhancement and dual-path CNN architecture. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103465] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
20
Medical Image Classification Based on Information Interaction Perception Mechanism. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:8429899. [PMID: 34912447 PMCID: PMC8668365 DOI: 10.1155/2021/8429899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 11/12/2021] [Indexed: 12/18/2022]
Abstract
Colorectal cancer originates from adenomatous polyps. Adenomatous polyps start out as benign, but over time they can become malignant, spreading to adherent and surrounding organs such as lymph nodes, the liver, or the lungs, and eventually leading to complications and death. Factors such as an operator's lack of experience and visual fatigue directly affect the diagnostic accuracy of colonoscopy. To relieve the pressure on medical imaging personnel, this paper proposes a network model for colonic polyp detection using colonoscopy images. Considering the unnoticeable surface texture of colonic polyps, this paper designs a channel information interaction perception (CIIP) module. Based on this module, an information interaction perception network (IIP-Net) is proposed. In order to improve the accuracy of classification and reduce the cost of calculation, the network uses three classifiers: a fully connected (FC) structure, a global average pooling fully connected (GAP-FC) structure, and a convolution global average pooling (C-GAP) structure. We evaluated the performance of IIP-Net by randomly selecting colonoscopy images from a gastroscopy database. The experimental results showed that the overall accuracy of the IIP-Net54-GAP-FC model is 99.59%, and the accuracy on colonic polyps is 99.40%, demonstrating that IIP-Net54-GAP-FC performs extremely well.
21
Wang X, Hu Y, Luo Y, Wang W. D2-CovidNet: A Deep Learning Model for COVID-19 Detection in Chest X-Ray Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:9952109. [PMID: 34925500 PMCID: PMC8674084 DOI: 10.1155/2021/9952109] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Accepted: 11/17/2021] [Indexed: 01/19/2023]
Abstract
Since the outbreak of coronavirus disease 2019 (COVID-19), it has been spreading rapidly worldwide and has not yet been effectively controlled. Many researchers are studying novel coronavirus pneumonia from chest X-ray images. In order to improve the detection accuracy, two modules sensitive to feature information, a dual-path multiscale feature fusion module and a dense depthwise separable convolution module, are proposed. Based on these two modules, a lightweight convolutional neural network model, D2-CovidNet, is designed to assist experts in diagnosing COVID-19 by identifying chest X-ray images. D2-CovidNet is tested on two public data sets, and its classification accuracy, precision, sensitivity, specificity, and F1-score are 94.56%, 95.14%, 94.02%, 96.61%, and 95.30%, respectively. Specifically, the precision, sensitivity, and specificity of the network for COVID-19 are 98.97%, 94.12%, and 99.84%, respectively. D2-CovidNet has a lower computation count and fewer parameters. Compared with other methods, D2-CovidNet can help diagnose COVID-19 more quickly and accurately.
Affiliation(s)
- Xin Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
- Yiyang Hu
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
- Yanhong Luo
- Hunan Children's Hospital, Changsha 410000, China
- Wei Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
22
Polyp Detection from Colorectum Images by Using Attentive YOLOv5. Diagnostics (Basel) 2021; 11:diagnostics11122264. [PMID: 34943501 PMCID: PMC8700704 DOI: 10.3390/diagnostics11122264] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/26/2021] [Accepted: 11/30/2021] [Indexed: 01/05/2023] Open
Abstract
Background: High-quality colonoscopy is essential to prevent the occurrence of colorectal cancers. The data of colonoscopy are mainly stored in the form of images. Therefore, artificial intelligence-assisted colonoscopy based on medical images is not only a research hotspot, but also one of the effective auxiliary means to improve the detection rate of adenomas. This research has become the focus of medical institutions and scientific research departments and has important clinical and scientific research value. Methods: In this paper, we propose a YOLOv5 model based on a self-attention mechanism for polyp target detection. This method treats detection as a regression problem, using the entire image as the input of the network and directly regressing the target bounding boxes at multiple positions in the image. In the feature extraction process, an attention mechanism is added to enhance the contribution of information-rich feature channels and weaken the interference of useless channels. Results: The experimental results show that the method can accurately identify polyp images, especially small polyps and polyps with inconspicuous contrasts, and the detection speed is greatly improved compared with the comparison algorithm. Conclusions: This study will be of great help in reducing clinicians' missed diagnoses during endoscopy and treatment, and it is also of great significance to the development of clinicians' clinical work.
23
Viscaino M, Torres Bustos J, Muñoz P, Auat Cheein C, Cheein FA. Artificial intelligence for the early detection of colorectal cancer: A comprehensive review of its advantages and misconceptions. World J Gastroenterol 2021; 27:6399-6414. [PMID: 34720530 PMCID: PMC8517786 DOI: 10.3748/wjg.v27.i38.6399] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 04/26/2021] [Accepted: 09/14/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) was the second-ranked cancer worldwide in terms of mortality during 2020, with a crude mortality rate of 12.0 per 100000 inhabitants. It can be prevented if glandular tissue (adenomatous polyps) is detected early. Colonoscopy has been strongly recommended as a screening test for both early cancer and adenomatous polyps. However, it has some limitations, including a high miss rate for smaller (< 10 mm) or flat polyps, which are easily overlooked during visual inspection. Due to the rapid advancement of technology, artificial intelligence (AI) has been a thriving area in different fields, including medicine. Particularly, in gastroenterology, AI software has been included in computer-aided systems for diagnosis and to improve the assertiveness of automatic polyp detection and its classification as a preventive method for CRC. This article provides an overview of recent research focusing on AI tools and their applications in the early detection of CRC and adenomatous polyps, as well as an insightful analysis of the main advantages and misconceptions in the field.
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2340000, Chile
- Javier Torres Bustos
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2340000, Chile
- Pablo Muñoz
- Hospital Clinico, University of Chile, Santiago 8380456, Chile
- Cecilia Auat Cheein
- Facultad de Medicina, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
- Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2340000, Chile
24
Rahim T, Hassan SA, Shin SY. A deep convolutional neural network for the detection of polyps in colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102654] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
25
Liew WS, Tang TB, Lin CH, Lu CK. Automatic colonic polyp detection using integration of modified deep residual convolutional neural network and ensemble learning approaches. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106114. [PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 04/07/2021] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI) based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have thus far achieved a good accuracy level to detect polyps, they still have room to improve further (e.g. sensitivity). Therefore, a new CAD tool is developed in this study to detect colonic polyps accurately. METHODS In this paper, we propose a novel approach to distinguish colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis and AdaBoost ensemble learning. A powerful deep residual network architecture, ResNet-50, was investigated to reduce the computational time by altering its architecture. To keep the interference to a minimum, median filter, image thresholding, contrast enhancement, and normalisation techniques were exploited on the endoscopic images to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps. RESULTS The proposed approach trained with a combination of three datasets achieved Matthews Correlation Coefficient (MCC) of 0.9819 with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively. CONCLUSIONS These results show that our method could repeatedly classify endoscopic images automatically and could be used to effectively develop computer-aided diagnostic tools for early CRC detection.
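MCC, the headline metric above, balances all four confusion-matrix counts into a single score in [-1, 1], which makes it more informative than accuracy on imbalanced endoscopic datasets. A minimal sketch (with illustrative counts, not the study's data):

```python
import math

# Matthews Correlation Coefficient (MCC) from binary confusion-matrix
# counts. The counts below are illustrative, not the study's test data.

def mcc(tp, fp, fn, tn):
    """MCC in [-1, 1]; 1 = perfect, 0 = random, -1 = inverse prediction."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when undefined

print(round(mcc(tp=990, fp=6, fn=12, tn=992), 3))  # 0.982
```

Because the numerator vanishes whenever predictions are uncorrelated with labels, a high MCC such as the reported 0.9819 cannot be achieved by exploiting class imbalance alone.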
Affiliation(s)
- Win Sheng Liew, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Tong Boon Tang, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Cheng-Hung Lin, Department of Electrical Engineering and Biomedical Engineering Research Center, Yuan Ze University, Jungli 32003, Taiwan
- Cheng-Kai Lu, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
26
Detecting COVID-19 in Chest X-Ray Images via MCFF-Net. Comput Intell Neurosci 2021; 2021:3604900. [PMID: 34239548 PMCID: PMC8214492 DOI: 10.1155/2021/3604900]
Abstract
COVID-19 is a respiratory disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). As COVID-19 spreads rapidly around the world, case numbers continue to rise, and many countries face tremendous pressure on both public and medical resources. Although RT-PCR is the most widely used technology for COVID-19 detection, it has limitations, including high cost, long turnaround time, and low sensitivity. Based on the characteristics of chest X-ray (CXR) images, we design a Parallel Channel Attention Feature fusion module (PCAF) and a new convolutional neural network structure, MCFF-Net, built on it. To improve recognition efficiency, the network adopts three classifier heads: 1-FC, GAP-FC, and Conv1-GAP. The experimental results show that the MCFF-Net66-Conv1-GAP model achieves an overall accuracy of 94.66% for 4-class classification; for the COVID-19 class, accuracy, precision, sensitivity, specificity, and F1-score all reach 100%. MCFF-Net may not only assist clinicians in making appropriate decisions for COVID-19 diagnosis but also mitigate the shortage of testing kits.
27
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681]
Abstract
Artificial neural networks (ANNs) are one of the primary forms of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research on ANNs in gastrointestinal (GI) diseases, where this state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Head-to-head comparisons between ANNs and GI experts suggest that, with continued technical advances, ANNs can match experts in efficiency and accuracy. However, the shortcomings of ANNs are not negligible and may alter many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. Given the barriers to interdisciplinary knowledge, sophisticated concepts are explained in plain words and metaphors to make this review more accessible to medical practitioners and the general public.
Affiliation(s)
- Bo Cao, Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang, Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei, Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen, Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China