1. Javed N, Ghazanfar H, Balar B, Patel H. Role of Artificial Intelligence in Endoscopic Intervention: A Clinical Review. J Community Hosp Intern Med Perspect 2024; 14:37-43. PMID: 39036586; PMCID: PMC11259475; DOI: 10.55729/2000-9666.1341.
Abstract
Gastrointestinal diseases are increasing in global prevalence, and their contribution to both mortality and healthcare costs is rising accordingly. While interventions using scoping techniques or ultrasound are crucial to the timely diagnosis and management of illness, these techniques carry several limitations. Artificial intelligence, in the form of computerized diagnosis, deep learning systems, and neural networks, is increasingly being employed across medicine to improve the performance and outcomes of these tools. This review therefore discusses applications of artificial intelligence in endoscopy, colonoscopy, and endoscopic ultrasound.
Affiliation(s)
- Nismat Javed: Department of Internal Medicine, BronxCare Health System, Bronx, NY, USA
- Haider Ghazanfar: Department of Gastroenterology, BronxCare Health System, Bronx, NY, USA
- Bhavna Balar: Department of Gastroenterology, BronxCare Health System, Bronx, NY, USA
- Harish Patel: Department of Gastroenterology, BronxCare Health System, Bronx, NY, USA
2. Guo F, Meng H. Application of artificial intelligence in gastrointestinal endoscopy. Arab J Gastroenterol 2024; 25:93-96. PMID: 38228443; DOI: 10.1016/j.ajg.2023.12.010.
Abstract
Endoscopy is an important method for diagnosing gastrointestinal (GI) diseases. In this study, we provide an overview of recent advances in artificial intelligence (AI) technology in GI endoscopy, covering the esophagus, stomach, and large intestine, as well as capsule endoscopy of the small intestine. AI-assisted endoscopy shows high accuracy, sensitivity, and specificity in the detection and diagnosis of GI diseases at all levels. Hence, AI is poised to make a breakthrough in GI endoscopy in the near future. However, the technology currently has limitations and remains at a preclinical stage.
Affiliation(s)
- Fujia Guo: The First Affiliated Hospital, Dalian Medical University, Dalian 116044, China
- Hua Meng: The First Affiliated Hospital, Dalian Medical University, Dalian 116044, China
3. Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. PMID: 38277922; DOI: 10.1016/j.compbiomed.2024.108008.
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ with specialized narrow-band imaging (NBI). Such vascular characterization is lost in standard colonoscopy, the primary screening modality. Moreover, even for NBI observations, categorization remains biased by expert interpretation, with reported classification errors ranging from 59.5% to 84.2%. This work introduces an end-to-end computational strategy that enhances in situ standard colonoscopy observations with the vascular patterns typically observed under NBI. The synthetic images are produced by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The scheme includes an architecture that discriminates enhanced neoplastic patterns, achieving a clear separation in the embedding representation. The approach was validated on a public dataset of 76 sequences comprising standard optical sequences and the corresponding NBI observations. The enhanced optical sequences were automatically classified as adenomas or hyperplastic samples, achieving an F1-score of 0.86. To measure the sensitivity of the approach, serrated samples were projected into the trained architecture; statistically significant differences among the three classes (p < 0.05, Mann-Whitney U test) were observed. This work showed strong polyp discrimination when enhancing OC sequences with typical NBI patterns. The method also learns polyp class distributions under an unpaired criterion (close to real practice), with the capability to separate serrated samples from adenomas and hyperplastic ones.
Affiliation(s)
- Franklin Sierra-Jerez: Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez: Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
4. Young E, Edwards L, Singh R. The Role of Artificial Intelligence in Colorectal Cancer Screening: Lesion Detection and Lesion Characterization. Cancers (Basel) 2023; 15:5126. PMID: 37958301; PMCID: PMC10647850; DOI: 10.3390/cancers15215126.
Abstract
Colorectal cancer remains a leading cause of cancer-related morbidity and mortality worldwide, despite the widespread uptake of population surveillance strategies. This is in part due to the persistent development of 'interval colorectal cancers', where patients develop colorectal cancer despite appropriate surveillance intervals, implying pre-malignant polyps were not resected at a prior colonoscopy. Multiple techniques have been developed to improve the sensitivity and accuracy of lesion detection and characterisation in an effort to improve the efficacy of colorectal cancer screening, thereby reducing the incidence of interval colorectal cancers. This article presents a comprehensive review of the transformative role of artificial intelligence (AI), which has recently emerged as one such solution for improving the quality of screening and surveillance colonoscopy. Firstly, AI-driven algorithms demonstrate remarkable potential in addressing the challenge of overlooked polyps, particularly polyp subtypes infamous for escaping human detection because of their inconspicuous appearance. Secondly, AI empowers gastroenterologists without exhaustive training in advanced mucosal imaging to characterise polyps with accuracy similar to that of expert interventionalists, reducing the dependence on pathologic evaluation and guiding appropriate resection techniques or referrals for more complex resections. AI in colonoscopy holds the potential to advance the detection and characterisation of polyps, addressing current limitations and improving patient outcomes. The integration of AI technologies into routine colonoscopy represents a promising step towards more effective colorectal cancer screening and prevention.
Affiliation(s)
- Edward Young: Faculty of Health and Medical Sciences, University of Adelaide, Lyell McEwin Hospital, Haydown Rd, Elizabeth Vale, SA 5112, Australia
- Louisa Edwards: Faculty of Health and Medical Sciences, University of Adelaide, Queen Elizabeth Hospital, Port Rd, Woodville South, SA 5011, Australia
- Rajvinder Singh: Faculty of Health and Medical Sciences, University of Adelaide, Lyell McEwin Hospital, Haydown Rd, Elizabeth Vale, SA 5112, Australia
5. Keshtkar K, Reza Safarpour A, Heshmat R, Sotoudehmanesh R, Keshtkar A. A Systematic Review and Meta-analysis of Convolutional Neural Network in the Diagnosis of Colorectal Polyps and Cancer. Turk J Gastroenterol 2023; 34:985-997. PMID: 37681266; PMCID: PMC10645297; DOI: 10.5152/tjg.2023.22491.
Abstract
Convolutional neural networks are a class of deep neural networks used for various clinical purposes, including improving the detection rate of colorectal lesions. This systematic review and meta-analysis aimed to assess the performance of convolutional neural network-based models in the detection or classification of colorectal polyps and colorectal cancer. A systematic search was performed in MEDLINE, SCOPUS, Web of Science, and other related databases. The performance measures of the models were calculated under two scenarios, best and worst accuracy, and Stata and R software were used for the meta-analysis. From 3368 searched records, 24 primary studies were included. Across the worst and best scenarios, sensitivity and specificity in predicting colorectal polyps ranged from 84.7% to 91.6% and from 86.0% to 93.8%, respectively; for colorectal cancer, the corresponding values ranged from 93.2% to 94.1% and from 94.6% to 97.7%. The positive and negative likelihood ratios varied between 6.2 and 14.5 and between 0.09 and 0.17 for colorectal polyps, and between 17.1 and 41.2 and between 0.06 and 0.07 for colorectal cancer. The diagnostic odds ratio ranged from 36 to 162 for colorectal polyps and from 239.63 to 677.47 for colorectal cancer, with accuracies of 80.5% to 88.6% and 88.2% to 96.4%, respectively. The area under the receiver operating characteristic curve varied between 0.92 and 0.97 for colorectal polyps and between 0.98 and 0.99 for colorectal cancer. Convolutional neural network-based models showed acceptable accuracy in detecting colorectal polyps and colorectal cancer.
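The likelihood ratios, diagnostic odds ratio, and accuracy reported in the meta-analysis above are all derived from sensitivity and specificity. A minimal sketch of those relationships, using invented confusion-table counts rather than the study's data:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard diagnostic measures from a 2x2 confusion table.
    Illustrative helper; the counts passed below are made up."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    dor = lr_pos / lr_neg                  # diagnostic odds ratio (unitless)
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return sens, spec, lr_pos, lr_neg, dor, acc

sens, spec, lr_pos, lr_neg, dor, acc = diagnostic_measures(90, 6, 10, 94)
print(f"sens={sens:.2f} spec={spec:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f} DOR={dor:.0f}")
```

Note that the diagnostic odds ratio is a ratio, not a percentage, which is why the DOR ranges in the abstract are given as plain numbers.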
Affiliation(s)
- Kamyab Keshtkar: School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Ali Reza Safarpour: Gastroenterohepatology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Ramin Heshmat: Chronic Diseases Research Center, Endocrinology and Metabolism Population Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran
- Rasoul Sotoudehmanesh: Department of Gastroenterology, Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Abbas Keshtkar: Department of Health Sciences Education Development, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
6. van Bokhorst QNE, Houwen BBSL, Hazewinkel Y, Fockens P, Dekker E. Advances in artificial intelligence and computer science for computer-aided diagnosis of colorectal polyps: current status. Endosc Int Open 2023; 11:E752-E767. PMID: 37593158; PMCID: PMC10431975; DOI: 10.1055/a-2098-1999.
Affiliation(s)
- Querijn N E van Bokhorst: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Tergooi Medical Center, Hilversum, the Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, location Academic Medical Center, Amsterdam, the Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam, the Netherlands
7. Chung J, Oh DJ, Park J, Kim SH, Lim YJ. Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model. Diagnostics (Basel) 2023; 13:1389. PMID: 37189489; DOI: 10.3390/diagnostics13081389.
Abstract
The first step in reading a capsule endoscopy (CE) is determining the gastrointestinal (GI) organ. Because CE produces too many inappropriate and repetitive images, automatic organ classification cannot be directly applied to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model relative to the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the AI score's cut-off improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time, and setting the cut-off of the AI score to 99.9% resulted in a better intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be more easily located by adjusting the cut-off of the AI score and visualization of its result over time.
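The accuracy, precision, recall, and F1 score reported above are linked: F1 is the harmonic mean of precision and recall. A minimal sketch with invented confusion-table counts (not the study's data) chosen so the outputs land near the reported figures:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from a 2x2 confusion table.
    Counts used below are illustrative only."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(97, 12, 3, 88)
# precision ~0.89 and recall 0.97 yield an F1 of roughly 0.93,
# close to the rounded figures quoted in the abstract
```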
Affiliation(s)
- Joowon Chung: Department of Internal Medicine, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Dong Jun Oh: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Junseok Park: Department of Internal Medicine, Digestive Disease Center, Institute for Digestive Research, Soonchunhyang University College of Medicine, Seoul 04401, Republic of Korea
- Su Hwan Kim: Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul 07061, Republic of Korea
- Yun Jeong Lim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
8. González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. Biomed Opt Express 2023; 14:593-607. PMID: 36874484; PMCID: PMC9979670; DOI: 10.1364/boe.473446.
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening, as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance can vary during a procedure, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
Affiliation(s)
- Juana González-Bueno Puyal: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK; Odin Vision, London W1W 7TY, UK
- Omer F. Ahmad: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Rawen Kader: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Laurence Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
9. Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:1459. PMID: 36143244; PMCID: PMC9503374; DOI: 10.3390/jpm12091459.
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal region of the gastrointestinal tract, arising from the mucous membrane as protrusions of abnormal tissue, and adenomatous polyps in particular carry a risk of progressing to cancer. Early examination of polyps can therefore reduce the chance of malignant transformation. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at early stages. Here, a deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing stage improves the contrast of the colonoscopy frames, while the encoder analyzes features and the decoder synthesizes them. The Graft-U-Net model offers better segmentation results than existing deep learning models. Experiments were conducted on two open-access colonoscopy datasets, Kvasir-SEG and CVC-ClinicDB, both prepared from colonoscopies of the large bowel. The proposed model achieved a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on Kvasir-SEG, and a mean Dice of 89.95% and an mIoU of 81.38% on CVC-ClinicDB.
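The Dice coefficient and IoU quoted above both measure overlap between a predicted mask and the ground truth. A minimal pure-Python sketch on toy 0/1 masks (real masks are 2D images, and the paper reports means over whole datasets):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and Intersection over Union for two binary masks,
    given as flat sequences of 0/1. Toy illustration only."""
    inter = sum(p and t for p, t in zip(pred, truth))   # |P ∩ T|
    union = sum(p or t for p, t in zip(pred, truth))    # |P ∪ T|
    dice = 2 * inter / (sum(pred) + sum(truth))
    iou = inter / union
    return dice, iou

dice, iou = dice_and_iou([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
# inter=2, union=3: Dice = 4/5 = 0.8, IoU = 2/3
```

For the same prediction, IoU is always the stricter of the two scores (IoU = Dice / (2 − Dice)), which is why the reported mIoU values sit below the mean Dice values.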
Affiliation(s)
- Muhammad Ramzan: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza (corresponding author): Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Muhammad Imran Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
10. Gong EJ, Bang CS, Lee JJ, Yang YJ, Baik GH. Impact of the Volume and Distribution of Training Datasets in the Development of Deep-Learning Models for the Diagnosis of Colorectal Polyps in Endoscopy Images. J Pers Med 2022; 12:1361. PMID: 36143146; PMCID: PMC9505038; DOI: 10.3390/jpm12091361.
Abstract
Background: There is no standardized dataset for establishing artificial intelligence models in gastrointestinal endoscopy, and the optimal volume or class distribution of training datasets has not been evaluated. The authors previously created an artificial intelligence model that classifies endoscopic images of colorectal polyps into four categories: advanced colorectal cancer, early cancers/high-grade dysplasia, tubular adenoma, and non-neoplasm. The aim of this study was to evaluate the impact of the volume and class distribution of the training dataset on deep-learning models that predict colorectal polyp histopathology from endoscopic images. Methods: The same 3828 endoscopic images used to create the earlier models were used, with an additional 6838 images, to find the optimal volume and class distribution. Deep-learning models were established with various data volumes and class distributions, uniformly trained on the no-code platform Neuro-T. Accuracy of the four-class prediction was the primary outcome. Results: In the original, doubled, and tripled datasets alike, the highest internal-test classification accuracy was obtained by doubling the proportion of data for the fewer categories (2:2:1:1 for advanced colorectal cancer : early cancers/high-grade dysplasia : tubular adenoma : non-neoplasm). Doubling the proportion of the fewer categories in the original dataset showed the highest accuracy (86.4%, 95% confidence interval: 85.0–97.8%) compared with the doubled or tripled dataset, and required only 2418 images. Gradient-weighted class activation mapping confirmed that the regions the deep-learning model attends to coincide with those an endoscopist attends to. Conclusion: Given the data-volume-dependent performance plateau of this colonoscopy classification model, a doubled or tripled dataset is not always beneficial to training. Deep-learning models would be more accurate if the proportion of lesions in the fewer categories were increased.
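The 2:2:1:1 weighting described above amounts to splitting a fixed training budget across the four histology classes in a given ratio. A hypothetical helper sketching that split (the authors worked in the Neuro-T platform, not code like this):

```python
def target_counts(total, weights):
    """Split a training-image budget across classes in a given ratio.
    Hypothetical illustration of the 2:2:1:1 weighting; not the authors' pipeline."""
    s = sum(weights.values())
    return {cls: round(total * w / s) for cls, w in weights.items()}

counts = target_counts(2418, {"advanced_cancer": 2, "early_cancer_hgd": 2,
                              "tubular_adenoma": 1, "non_neoplasm": 1})
# 2:2:1:1 over the 2418-image budget gives 806, 806, 403, 403
```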
Affiliation(s)
- Eun Jeong Gong: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Chang Seok Bang (corresponding author): Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Young Joo Yang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Gwang Ho Baik: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
11. Automated histological classification for digital pathology images of colonoscopy specimen via deep learning. Sci Rep 2022; 12:12804. PMID: 35896791; PMCID: PMC9329279; DOI: 10.1038/s41598-022-16885-x.
Abstract
Colonoscopy is an effective tool to detect colorectal lesions and needs the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and the model performances were evaluated. A total of 1865 images were included from 703 patients, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0–98.6%) by DenseNet-161 and 95.9% (95% CI 94.1–97.7%) by EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma (1.000; 95% CI 0.999–1.000) by DenseNet-161 and TSA (1.000; 95% CI 1.000–1.000) by EfficientNet-B7. The lowest per-class AUCs were still excellent: 0.991 (95% CI 0.983–0.999) for HP by DenseNet-161 and 0.995 (95% CI 0.992–0.998) for SSA by EfficientNet-B7. The deep learning models achieved excellent performance in discriminating adenocarcinoma from non-adenocarcinoma lesions, with AUCs of 0.995 and 0.998. The pathognomonic area for each class was appropriately highlighted in digital images by saliency maps, particularly focusing on epithelial lesions. Deep learning models may be a useful tool to aid the diagnosis of pathologic slides of colonoscopy-related specimens.
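Per-class AUCs like those above are typically computed one-vs-rest: each class's predicted probability is ranked against all other classes. A pure-Python sketch of the rank interpretation of AUC, on invented scores and labels:

```python
def auc_one_vs_rest(scores, labels, positive):
    """AUC as the probability that a randomly chosen positive example receives a
    higher score than a randomly chosen negative one (ties count half).
    Illustrative only; `scores` are the model's probabilities for `positive`."""
    pos = [s for s, l in zip(scores, labels) if l == positive]
    neg = [s for s, l in zip(scores, labels) if l != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_one_vs_rest([0.95, 0.80, 0.40, 0.30, 0.85],
                      ["TSA", "TSA", "HP", "TA", "HP"], "TSA")
# TSA scores {0.95, 0.80} vs the rest {0.40, 0.30, 0.85}: 5 of 6 pairs won
```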
12. No-Code Platform-Based Deep-Learning Models for Prediction of Colorectal Polyp Histology from White-Light Endoscopy Images: Development and Performance Verification. J Pers Med 2022; 12:963. PMID: 35743748; PMCID: PMC9225479; DOI: 10.3390/jpm12060963.
Abstract
Background: The authors previously developed deep-learning models for the prediction of colorectal polyp histology (advanced colorectal cancer, early cancer/high-grade dysplasia, tubular adenoma with or without low-grade dysplasia, or non-neoplasm) from endoscopic images. While the model achieved 67.3% internal-test accuracy and 79.2% external-test accuracy, model development was labour-intensive and required specialised programming expertise. Moreover, the 240-image external-test dataset included only three advanced and eight early cancers, so it was difficult to generalise model performance. These limitations may be mitigated by deep-learning models developed using no-code platforms. Objective: To establish no-code platform-based deep-learning models for the prediction of colorectal polyp histology from white-light endoscopy images and compare their diagnostic performance with traditional models. Methods: The same 3828 endoscopic images used to establish previous models were used to establish new models based on no-code platforms Neuro-T, VLAD, and Create ML-Image Classifier. A prospective multicentre validation study was then conducted using 3818 novel images. The primary outcome was the accuracy of four-category prediction. Results: The model established using Neuro-T achieved the highest internal-test accuracy (75.3%, 95% confidence interval: 71.0–79.6%) and external-test accuracy (80.2%, 76.9–83.5%) but required the longest training time. In contrast, the model established using Create ML-Image Classifier required only 3 min for training and still achieved 72.7% (70.8–74.6%) external-test accuracy. Attention map analysis revealed that the imaging features used by the no-code deep-learning models were similar to those used by endoscopists during visual inspection. Conclusion: No-code deep-learning tools allow for the rapid development of models with high accuracy for predicting colorectal polyp histology.
13.

Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural network, which is a class of deep learning method, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 articles in esophagus, 30 articles in stomach, 29 articles in CE, and 63 articles in colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been recently reported. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of the retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we have highlighted the published literature along with providing current status and insights into the future of AI in GI endoscopy.
Affiliation(s)
- Yutaka Okagawa: Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan; Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
- Seiichiro Abe: Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Masayoshi Yamada: Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Ichiro Oda: Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Yutaka Saito: Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
14
|
Yang H, Hu B. Early gastrointestinal cancer: The application of artificial intelligence. Artif Intell Gastrointest Endosc 2021; 2:185-197. [DOI: 10.37126/aige.v2.i4.185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 06/25/2021] [Accepted: 08/18/2021] [Indexed: 02/06/2023] Open
Abstract
Early gastrointestinal (GI) cancer is at the core of clinical endoscopic work, and its early detection and treatment are tightly associated with patients’ prognoses. As a novel technology, artificial intelligence has been improved and applied in the field of endoscopy. Studies on the detection, diagnosis, risk, and prognosis evaluation of diseases of the GI tract are under development, covering precancerous lesions, adenoma, early GI cancers, and advanced GI cancers. In this review, research on the esophagus, stomach, and colon is summarized and linked to the progression from precancerous lesion to early GI cancer: from Barrett’s esophagus to early esophageal cancer, from dysplasia to early gastric cancer, and from adenoma to early colonic cancer. An overview of the current status of research on early GI cancers and artificial intelligence is provided.
Collapse
Affiliation(s)
- Hang Yang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
| | - Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
| |
Collapse
|
15
|
Correia FP, Lourenço LC. Artificial intelligence application in diagnostic gastrointestinal endoscopy - Deus ex machina? World J Gastroenterol 2021; 27:5351-5361. [PMID: 34539137 PMCID: PMC8409168 DOI: 10.3748/wjg.v27.i32.5351] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 05/15/2021] [Accepted: 07/19/2021] [Indexed: 02/06/2023] Open
Abstract
The close relationship between medicine and technology, and the particular interest in this symbiosis in recent years, have led to the development of several computerized artificial intelligence (AI) systems aimed at various areas of medicine. A number of studies have demonstrated that those systems allow accurate diagnoses with histological precision, thus facilitating decision-making by clinicians in real time. In the field of gastroenterology, AI has been applied in the diagnosis of pathologies of the entire digestive tract and its attached glands, and is increasingly accepted for the detection of colorectal polyps and confirmation of their histological classification. Studies have shown high accuracy, sensitivity, and specificity compared with expert endoscopists, and especially compared with those with less experience. Other increasingly studied applications, with very promising results, are the investigation of dysplasia in patients with Barrett's esophagus and the endoscopic and histological assessment of colon inflammation in patients with ulcerative colitis. In some cases AI is thus better than, or at least equal to, human abilities. However, additional studies are needed to reinforce the existing data, and mainly to determine the applicability of this technology in other indications. This review summarizes the state of the art of AI in gastroenterological pathology.
Collapse
Affiliation(s)
- Fábio Pereira Correia
- Department of Gastroenterology, Hospital Prof. Dr Fernando Fonseca, Lisbon 2720-276, Portugal
| | - Luís Carvalho Lourenço
- Department of Gastroenterology, Hospital Prof. Dr Fernando Fonseca, Lisbon 2720-276, Portugal
| |
Collapse
|
16
|
Joseph J, LePage EM, Cheney CP, Pawa R. Artificial intelligence in colonoscopy. World J Gastroenterol 2021; 27:4802-4817. [PMID: 34447227 PMCID: PMC8371500 DOI: 10.3748/wjg.v27.i29.4802] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 05/12/2021] [Accepted: 07/16/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer remains a leading cause of morbidity and mortality in the United States. Advances in artificial intelligence (AI), specifically computer-aided detection and computer-aided diagnosis, offer promising methods of increasing adenoma detection rates with the goal of removing more pre-cancerous polyps. Conversely, these methods may also allow smaller non-cancerous lesions to be diagnosed in vivo and left in place, decreasing the risks that come with unnecessary polypectomies. This review provides an overview of current advances in the use of AI in colonoscopy to aid in polyp detection and characterization, as well as areas of developing research.
Collapse
Affiliation(s)
- Joel Joseph
- Department of Internal Medicine, Wake Forest Baptist Medical Center, Winston Salem, NC 27157, United States
| | - Ella Marie LePage
- Department of Internal Medicine, Wake Forest Baptist Medical Center, Winston Salem, NC 27157, United States
| | - Catherine Phillips Cheney
- Department of Internal Medicine, Wake Forest School of Medicine, Winston Salem, NC 27157, United States
| | - Rishi Pawa
- Department of Internal Medicine, Section of Gastroenterology and Hepatology, Wake Forest Baptist Medical Center, Winston-Salem, NC 27157, United States
| |
Collapse
|
17
|
Nazarian S, Glover B, Ashrafian H, Darzi A, Teare J. Diagnostic Accuracy of Artificial Intelligence and Computer-Aided Diagnosis for the Detection and Characterization of Colorectal Polyps: Systematic Review and Meta-analysis. J Med Internet Res 2021; 23:e27370. [PMID: 34259645 PMCID: PMC8319784 DOI: 10.2196/27370] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 03/09/2021] [Accepted: 05/06/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Colonoscopy reduces the incidence of colorectal cancer (CRC) by allowing detection and resection of neoplastic polyps. Evidence shows that many small polyps are missed on a single colonoscopy. There has been a successful adoption of artificial intelligence (AI) technologies to tackle the issues around missed polyps and as tools to increase the adenoma detection rate (ADR). OBJECTIVE The aim of this review was to examine the diagnostic accuracy of AI-based technologies in assessing colorectal polyps. METHODS A comprehensive literature search was undertaken using the databases of Embase, MEDLINE, and the Cochrane Library. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed. Studies reporting the use of computer-aided diagnosis for polyp detection or characterization during colonoscopy were included. Independent proportions and their differences were calculated and pooled through DerSimonian and Laird random-effects modeling. RESULTS A total of 48 studies were included. The meta-analysis showed a significant increase in pooled polyp detection rate in patients with the use of AI for polyp detection during colonoscopy compared with patients who had standard colonoscopy (odds ratio [OR] 1.75, 95% CI 1.56-1.96; P<.001). When comparing patients undergoing colonoscopy with the use of AI to those without, there was also a significant increase in ADR (OR 1.53, 95% CI 1.32-1.77; P<.001). CONCLUSIONS With the aid of machine learning, there is potential to improve ADR and, consequently, reduce the incidence of CRC. The current generation of AI-based systems demonstrate impressive accuracy for the detection and characterization of colorectal polyps. However, this is an evolving field and before its adoption into a clinical setting, AI systems must prove worthy to patients and clinicians. 
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42020169786; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020169786.
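The pooled odds ratios above were obtained with DerSimonian and Laird random-effects modeling. As a minimal sketch of that method — using illustrative per-study values, not the review's actual data — the between-study variance is estimated by the method of moments from Cochran's Q and folded into the inverse-variance weights:

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool per-study log odds ratios with DerSimonian-Laird random effects.

    log_ors   -- per-study log odds ratios
    variances -- per-study within-study variances of the log ORs
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    k = len(log_ors)
    # Method-of-moments estimate of between-study variance tau^2 (truncated at 0)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    # Pooled OR with 95% CI, back-transformed to the odds-ratio scale
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Illustrative study-level inputs (hypothetical, not the review's data)
or_, lo, hi = dersimonian_laird([0.6, 0.4, 0.7], [0.04, 0.09, 0.06])
```

When Q falls below its degrees of freedom, tau^2 truncates to zero and the estimate coincides with the fixed-effect result, which is why the random-effects CI is never narrower than the fixed-effect one.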
Collapse
Affiliation(s)
- Scarlet Nazarian
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Ben Glover
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Hutan Ashrafian
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Ara Darzi
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Julian Teare
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| |
Collapse
|
18
|
Shao Y, Zhang YX, Chen HH, Lu SS, Zhang SC, Zhang JX. Advances in the application of artificial intelligence in solid tumor imaging. Artif Intell Cancer 2021; 2:12-24. [DOI: 10.35713/aic.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 04/02/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Early diagnosis and timely treatment are crucial in reducing cancer-related mortality. Artificial intelligence (AI) has greatly relieved clinical workloads and changed current medical workflows. We searched for recent studies, reports, and reviews referring to AI and solid tumors; many reviews have summarized AI applications in the diagnosis and treatment of a single tumor type. We herein systematically review the advances of AI applications in multiple solid tumors, including those of the esophagus, stomach, intestine, breast, thyroid, prostate, lung, liver, cervix, pancreas, and kidney, with a specific focus on the continual improvement of model performance in imaging practice.
Collapse
Affiliation(s)
- Ying Shao
- Department of Laboratory Medicine, People Hospital of Jiangying, Jiangying 214400, Jiangsu Province, China
| | - Yu-Xuan Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Huan-Huan Chen
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shan-Shan Lu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shi-Chang Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Jie-Xin Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| |
Collapse
|
19
|
Jones MA, MacCuaig WM, Frickenstein AN, Camalan S, Gurcan MN, Holter-Chakrabarty J, Morris KT, McNally MW, Booth KK, Carter S, Grizzle WE, McNally LR. Molecular Imaging of Inflammatory Disease. Biomedicines 2021; 9:152. [PMID: 33557374 PMCID: PMC7914540 DOI: 10.3390/biomedicines9020152] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 01/25/2021] [Accepted: 01/31/2021] [Indexed: 02/06/2023] Open
Abstract
Inflammatory diseases include a wide variety of highly prevalent conditions with high mortality rates in severe cases, ranging from cardiovascular disease, to rheumatoid arthritis, to chronic obstructive pulmonary disease, to graft vs. host disease, to a number of gastrointestinal disorders. Many diseases that are not considered inflammatory per se are associated with varying levels of inflammation. Imaging of the immune system and inflammatory response is of interest as it can give insight into disease progression and severity. Clinical imaging technologies such as computed tomography (CT) and magnetic resonance imaging (MRI) are traditionally limited to the visualization of anatomical information; the presence or absence of an inflammatory state must then be inferred from structural abnormalities. Improvement in available contrast agents has made it possible to obtain functional as well as anatomical information. In vivo imaging of inflammation ultimately facilitates improved diagnostic accuracy and monitoring of patients to allow for better patient care. Highly specific molecular imaging of inflammatory biomarkers allows for earlier diagnosis to prevent irreversible damage. Advancements in imaging instruments, targeted tracers, and contrast agents represent a rapidly growing area of preclinical research, with the hope of quick translation to the clinic.
Collapse
Affiliation(s)
- Meredith A. Jones
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA; (M.A.J.); (W.M.M.); (A.N.F.)
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
| | - William M. MacCuaig
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA; (M.A.J.); (W.M.M.); (A.N.F.)
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
| | - Alex N. Frickenstein
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA; (M.A.J.); (W.M.M.); (A.N.F.)
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
| | - Seda Camalan
- Department of Internal Medicine, Wake Forest Baptist Health, Winston-Salem, NC 27157, USA; (S.C.); (M.N.G.)
| | - Metin N. Gurcan
- Department of Internal Medicine, Wake Forest Baptist Health, Winston-Salem, NC 27157, USA; (S.C.); (M.N.G.)
| | - Jennifer Holter-Chakrabarty
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
- Department of Medicine, University of Oklahoma, Oklahoma City, OK 73104, USA
| | - Katherine T. Morris
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
- Department of Surgery, University of Oklahoma, Oklahoma City, OK 73104, USA
| | - Molly W. McNally
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
| | - Kristina K. Booth
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
- Department of Surgery, University of Oklahoma, Oklahoma City, OK 73104, USA
| | - Steven Carter
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
- Department of Surgery, University of Oklahoma, Oklahoma City, OK 73104, USA
| | - William E. Grizzle
- Department of Pathology, University of Alabama at Birmingham, Birmingham, AL 35294, USA;
| | - Lacey R. McNally
- Stephenson Cancer Center, University of Oklahoma, Oklahoma City, OK 73104, USA; (J.H.-C.); (K.T.M.); (M.W.M.); (K.K.B.); (S.C.)
- Department of Surgery, University of Oklahoma, Oklahoma City, OK 73104, USA
| |
Collapse
|
20
|
Automated Classification and Segmentation in Colorectal Images Based on Self-Paced Transfer Network. BIOMED RESEARCH INTERNATIONAL 2021; 2021:6683931. [PMID: 33542924 PMCID: PMC7843175 DOI: 10.1155/2021/6683931] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 12/22/2020] [Accepted: 01/06/2021] [Indexed: 02/08/2023]
Abstract
Colorectal imaging aids the diagnosis of colorectal diseases, but manual diagnosis from colorectal images is labor-intensive and time-consuming. In this paper, we present a method for automatic colorectal disease classification and segmentation. To cope with class-imbalanced and difficult colorectal data, a classification method based on a self-paced transfer VGG network (STVGG) is proposed. ImageNet-pretrained parameters are transferred to the VGG network, which is then trained on colorectal data to obtain good initial performance, and self-paced learning is used to optimize the network so that classification of imbalanced and difficult samples improves. To help the colonoscopist accurately determine whether a polyp requires surgical resection, the features of the trained STVGG model are shared with a U-Net segmentation network as its encoder, avoiding repeated training of the polyp segmentation model. Experimental results on 3061 colorectal images show that the proposed method achieved higher classification accuracy (96%) and better segmentation performance than several other methods, segmenting polyps accurately from the surrounding tissue. These results underscore the potential of deep learning methods to assist colonoscopists in identifying polyps and enabling timely resection at an early stage.
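The self-paced element of this approach admits easy (low-loss) samples first and gradually relaxes the threshold so harder samples join later training rounds. A toy sketch of that curriculum with the hard self-paced regularizer — illustrative values only, not the paper's STVGG implementation:

```python
def self_paced_weights(losses, lam):
    # Hard self-paced regularizer: a sample joins training only if its
    # current loss is below the "age" parameter lam
    return [1.0 if loss < lam else 0.0 for loss in losses]

# Hypothetical per-sample losses under some current model state;
# larger values stand in for harder or noisier samples
losses = [0.05, 0.10, 0.40, 0.90, 1.50]

schedule = []
lam, growth = 0.2, 4.0
for epoch in range(3):
    weights = self_paced_weights(losses, lam)
    schedule.append(sum(int(w) for w in weights))  # samples admitted this round
    lam *= growth  # relax the threshold so harder samples enter later epochs
```

The admitted-sample count grows monotonically (here 2, then 3, then all 5), which is the curriculum effect: the model settles on easy samples before the difficult, imbalanced ones influence the gradients.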
Collapse
|
21
|
Wittenberg T, Raithel M. Artificial Intelligence-Based Polyp Detection in Colonoscopy: Where Have We Been, Where Do We Stand, and Where Are We Headed? Visc Med 2020; 36:428-438. [PMID: 33447598 PMCID: PMC7768101 DOI: 10.1159/000512438] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Accepted: 10/20/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND In the past, image-based computer-assisted diagnosis and detection systems have been driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (known as the "Big Data" phenomenon), in combination with developments in the domain of artificial intelligence (AI), particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. SUMMARY With respect to these developments, the scope of this contribution is to provide a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years, starting with the age of "handcrafted geometrical features" together with simple classification schemes, through the development and use of "texture-based features" and machine learning approaches, and ending with current developments in the field of deep learning using convolutional neural networks. In parallel, the need for large-scale clinical data to develop such methods is discussed, up to commercially available AI products for the automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a brief outlook is given on further possibilities of AI methods within colonoscopy. KEY MESSAGES Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.
Collapse
|
22
|
Marlicz W, Koulaouzidis G, Koulaouzidis A. Artificial Intelligence in Gastroenterology-Walking into the Room of Little Miracles. J Clin Med 2020; 9:jcm9113675. [PMID: 33207649 PMCID: PMC7697458 DOI: 10.3390/jcm9113675] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 11/11/2020] [Indexed: 12/15/2022] Open
Affiliation(s)
- Wojciech Marlicz
- Department of Gastroenterology, Pomeranian Medical University, 71-252 Szczecin, Poland
- The Centre for Digestive Diseases Endoklinika, 70-535 Szczecin, Poland;
- Correspondence:
| | | | | |
Collapse
|
23
|
Suh YJ, Jung J, Cho BJ. Automated Breast Cancer Detection in Digital Mammograms of Various Densities via Deep Learning. J Pers Med 2020; 10:jpm10040211. [PMID: 33172076 PMCID: PMC7711783 DOI: 10.3390/jpm10040211] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 11/03/2020] [Accepted: 11/04/2020] [Indexed: 01/11/2023] Open
Abstract
Mammography plays an important role in screening breast cancer among females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model detecting breast cancer in digital mammograms of various densities and to evaluate the model performance compared to previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. The performances were tested using 301 merged images from 284 subjects and compared to a meta-analysis including 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 with DenseNet-169 and 0.954 ± 0.020 with EfficientNet-B5. The performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients’ age was used as a covariate for malignancy detection, the performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87 and 88%, respectively) surpassed the mean values (81 and 82%, respectively) obtained in a meta-analysis. Deep learning would work efficiently in screening breast cancer in digital mammograms of various densities, and its benefit could be maximized in breasts with lower parenchyma density.
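The AUC figures reported above rank a classifier by the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. That equivalence (the Mann-Whitney U formulation) gives a compact way to compute AUC directly from scores and labels; the values below are hypothetical, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation; ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative model scores and malignancy labels (hypothetical data)
scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]
auc = roc_auc(scores, labels)
```

Because AUC depends only on the ranking of scores, it is insensitive to any monotone rescaling of the model outputs, which is why it is the standard headline metric for comparing detection models across studies.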
Collapse
Affiliation(s)
- Yong Joon Suh
- Department of Breast and Endocrine Surgery, Hallym University Sacred Heart Hospital, Anyang 14068, Korea;
| | - Jaewon Jung
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
| | - Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Anyang 14068, Korea
- Correspondence: ; Tel.: +82-31-380-3835; Fax: +82-31-380-3837
| |
Collapse
|
24
|
Scope of Artificial Intelligence in Screening and Diagnosis of Colorectal Cancer. J Clin Med 2020; 9:jcm9103313. [PMID: 33076511 PMCID: PMC7602532 DOI: 10.3390/jcm9103313] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 10/09/2020] [Accepted: 10/12/2020] [Indexed: 12/15/2022] Open
Abstract
Globally, colorectal cancer is the third most diagnosed malignancy. It causes significant mortality and morbidity, which can be reduced by early diagnosis with an effective screening test. Integrating artificial intelligence (AI) and computer-aided detection (CAD) with screening methods has shown promising results for colorectal cancer screening. AI could provide a “second look” for endoscopists to decrease the rate of missed polyps during a colonoscopy. It can also improve the detection and characterization of polyps by integration with colonoscopy and various advanced endoscopic modalities such as magnifying narrow-band imaging, endocytoscopy, confocal endomicroscopy, laser-induced fluorescence spectroscopy, and magnifying chromoendoscopy. This descriptive review discusses various AI and CAD applications in colorectal cancer screening, polyp detection, and characterization.
Collapse
|