201. Young E, Philpott H, Singh R. Endoscopic diagnosis and treatment of gastric dysplasia and early cancer: Current evidence and what the future may hold. World J Gastroenterol 2021; 27:5126-5151. [PMID: 34497440] [PMCID: PMC8384753] [DOI: 10.3748/wjg.v27.i31.5126]
Abstract
Gastric cancer accounts for a significant proportion of worldwide cancer-related morbidity and mortality. The well documented precancerous cascade provides an opportunity for clinicians to detect and treat gastric cancers at an endoscopically curable stage. In high prevalence regions such as Japan and Korea, this has led to the implementation of population screening programs. However, guidelines remain ambiguous in lower prevalence regions. In recent years, there have been many advances in the endoscopic diagnosis and treatment of early gastric cancer and precancerous lesions. More advanced endoscopic imaging has led to improved detection and characterization of gastric lesions as well as superior accuracy for delineation of margins prior to resection. In addition, promising early data on artificial intelligence in gastroscopy suggests a future role for this technology in maximizing the yield of advanced endoscopic imaging. Data on endoscopic resection (ER) are particularly robust in Japan and Korea, with high rates of curative ER and markedly reduced procedural morbidity. However, there is a shortage of data in other regions to support the applicability of protocols from these high prevalence countries. Future advances in endoscopic therapeutics will likely lead to further expansion of the current indications for ER, as both technology and proceduralist expertise continue to grow.
Affiliation(s)
- Edward Young: Department of Gastroenterology, Lyell McEwin Hospital, Elizabeth Vale 5112, SA, Australia; Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, SA, Australia
- Hamish Philpott: Department of Gastroenterology, Lyell McEwin Hospital, Elizabeth Vale 5112, SA, Australia
- Rajvinder Singh: Department of Gastroenterology, Lyell McEwin Hospital, Elizabeth Vale 5112, SA, Australia; Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, SA, Australia
202. Weng CY, Xu JL, Sun SP, Wang KJ, Lv B. Helicobacter pylori eradication: Exploring its impacts on the gastric mucosa. World J Gastroenterol 2021; 27:5152-5170. [PMID: 34497441] [PMCID: PMC8384747] [DOI: 10.3748/wjg.v27.i31.5152]
Abstract
Helicobacter pylori (H. pylori) infects approximately 50% of all humans globally. Persistent H. pylori infection causes multiple gastric and extragastric diseases, indicating the importance of early diagnosis and timely treatment. H. pylori eradication produces dramatic changes in the gastric mucosa, resulting in restored function. Consequently, to better understand the importance of H. pylori eradication and clarify the subsequent recovery of gastric mucosal functions after eradication, we summarize histological, endoscopic, and gastric microbiota changes to assess the therapeutic effects on the gastric mucosa.
Affiliation(s)
- Chun-Yan Weng: Department of Gastroenterology, The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang Province, China
- Jing-Li Xu: Department of Gastrointestinal Surgery, The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang Province, China
- Shao-Peng Sun: Department of Gastroenterology, The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang Province, China
- Kai-Jie Wang: Department of Gastroenterology, The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang Province, China
- Bin Lv: Department of Gastroenterology, The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang Province, China; Department of Gastroenterology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou 310006, Zhejiang Province, China
203. Gholami E, Kamel Tabbakh SR, Kheirabadi M. Increasing the accuracy in the diagnosis of stomach cancer based on color and lint features of tongue. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102782]
204. Uema R, Hayashi Y, Tashiro T, Saiki H, Kato M, Amano T, Tani M, Yoshihara T, Inoue T, Kimura K, Iwatani S, Sakatani A, Yoshii S, Tsujii Y, Shinzaki S, Iijima H, Takehara T. Use of a convolutional neural network for classifying microvessels of superficial esophageal squamous cell carcinomas. J Gastroenterol Hepatol 2021; 36:2239-2246. [PMID: 33694189] [DOI: 10.1111/jgh.15479]
Abstract
BACKGROUND AND AIM The morphological diagnosis of microvessels on the surface of superficial esophageal squamous cell carcinomas using magnifying endoscopy with narrow-band imaging is widely used in clinical practice. Nevertheless, inconsistency, even among experts, remains a problem. We constructed a convolutional neural network-based computer-aided diagnosis system to classify the microvessels of superficial esophageal squamous cell carcinomas and evaluated its diagnostic performance. METHODS In this retrospective study, cropped magnifying endoscopy with narrow-band imaging images from superficial esophageal squamous cell carcinoma lesions were used as the dataset. All images were assessed by three experts and classified into three classes (Type B1, B2, and B3) based on the Japan Esophagus Society classification. The dataset was divided into training and validation datasets. A convolutional neural network model (ResNeXt-101) was trained and tuned with the training dataset. To evaluate diagnostic accuracy, the validation dataset was assessed by the computer-aided diagnosis system and eight endoscopists. RESULTS In total, 1777 and 747 cropped images (total, 393 lesions) were included in the training and validation datasets, respectively. The computer-aided diagnosis system took 20.3 s to evaluate the 747 images in the validation dataset. Its microvessel classification accuracy was 84.2%, higher than the average of the eight endoscopists (77.8%, P < 0.001). The areas under the receiver operating characteristic curves for diagnosing Type B1, B2, and B3 vessels were 0.969, 0.948, and 0.973, respectively. CONCLUSIONS The computer-aided diagnosis system showed remarkable performance in the classification of microvessels on superficial esophageal squamous cell carcinomas.
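Per-class AUCs such as those reported for Type B1, B2, and B3 vessels are typically computed one-vs-rest from the classifier's per-class scores. A minimal sketch with scikit-learn; the function name and the toy scores below are illustrative assumptions, not material from the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def per_class_auc(y_true, y_score, classes):
    """One-vs-rest ROC AUC for each class label."""
    y_bin = label_binarize(y_true, classes=classes)  # shape (n_samples, n_classes)
    return {c: roc_auc_score(y_bin[:, i], y_score[:, i])
            for i, c in enumerate(classes)}

# Hypothetical softmax scores for six images over the three vessel types
y_true = ["B1", "B1", "B2", "B2", "B3", "B3"]
y_score = np.array([[0.8, 0.1, 0.1],
                    [0.7, 0.2, 0.1],
                    [0.2, 0.6, 0.2],
                    [0.3, 0.5, 0.2],
                    [0.1, 0.2, 0.7],
                    [0.0, 0.1, 0.9]])
aucs = per_class_auc(y_true, y_score, classes=["B1", "B2", "B3"])
# These toy scores are perfectly separated, so each class attains an AUC of 1.0
print(aucs)
```

In practice the scores come from the trained network's softmax layer over the validation set, and AUC values below 1.0 reflect overlap between the classes' score distributions.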
Affiliation(s)
- Ryotaro Uema, Yoshito Hayashi, Taku Tashiro, Hirotsugu Saiki, Minoru Kato, Takahiro Amano, Mizuki Tani, Takeo Yoshihara, Takanori Inoue, Keiichi Kimura, Shuko Iwatani, Akihiko Sakatani, Shunsuke Yoshii, Yoshiki Tsujii, Shinichiro Shinzaki, Hideki Iijima, Tetsuo Takehara: Department of Gastroenterology and Hepatology, Osaka University Graduate School of Medicine, Osaka, Japan
205. Anisuzzaman D, Barzekar H, Tong L, Luo J, Yu Z. A deep learning study on osteosarcoma detection from histological images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102931]
206. Tokat M, van Tilburg L, Koch AD, Spaander MCW. Artificial Intelligence in Upper Gastrointestinal Endoscopy. Dig Dis 2021; 40:395-408. [PMID: 34348267] [DOI: 10.1159/000518232]
Abstract
BACKGROUND Over the past decade, several artificial intelligence (AI) systems have been developed to assist in the endoscopic assessment of (pre-)cancerous lesions of the gastrointestinal (GI) tract. In this review, we aimed to provide an overview of the possible indications of AI technology in upper GI endoscopy and hypothesize about potential challenges for its use in clinical practice. SUMMARY Application of AI in upper GI endoscopy has been investigated for several indications: (1) detection, characterization, and delineation of esophageal and gastric cancer (GC) and their premalignant conditions; (2) prediction of tumor invasion; and (3) detection of Helicobacter pylori. AI systems show promising results with an accuracy of up to 99% for the detection of superficial and advanced upper GI cancers. AI outperformed trainee and experienced endoscopists for the detection of esophageal lesions and atrophic gastritis. For GC, AI outperformed mid-level and trainee endoscopists but not expert endoscopists. KEY MESSAGES Application of AI in upper gastrointestinal endoscopy may improve early diagnosis of esophageal and gastric cancer and may enable endoscopists to better identify patients eligible for endoscopic resection. The benefit of AI to the quality of upper endoscopy still needs to be demonstrated, and prospective trials are needed to confirm accuracy and feasibility during real-time daily endoscopy.
Affiliation(s)
- Meltem Tokat, Laurelle van Tilburg, Arjun D Koch, Manon C W Spaander: Department of Gastroenterology and Hepatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
207. Berbís MA, Aneiros-Fernández J, Mendoza Olivares FJ, Nava E, Luna A. Role of artificial intelligence in multidisciplinary imaging diagnosis of gastrointestinal diseases. World J Gastroenterol 2021; 27:4395-4412. [PMID: 34366612] [PMCID: PMC8316909] [DOI: 10.3748/wjg.v27.i27.4395]
Abstract
The use of artificial intelligence-based tools is regarded as a promising approach to increase clinical efficiency in diagnostic imaging, improve the interpretability of results, and support decision-making for the detection and prevention of diseases. Radiology, endoscopy and pathology images are suitable for deep-learning analysis, potentially changing the way care is delivered in gastroenterology. The aim of this review is to examine the key aspects of different neural network architectures used for the evaluation of gastrointestinal conditions, by discussing how different models behave in critical tasks, such as lesion detection or characterization (i.e. the distinction between benign and malignant lesions of the esophagus, the stomach and the colon). To this end, we provide an overview on recent achievements and future prospects in deep learning methods applied to the analysis of radiology, endoscopy and histologic whole-slide images of the gastrointestinal tract.
Affiliation(s)
- José Aneiros-Fernández: Department of Pathology, Hospital Universitario Clínico San Cecilio, Granada 18012, Spain
- Enrique Nava: Department of Communications Engineering, University of Málaga, Malaga 29016, Spain
- Antonio Luna: MRI Unit, Department of Radiology, HT Médica, Jaén 23007, Spain
208. Durak S, Bayram B, Bakırman T, Erkut M, Doğan M, Gürtürk M, Akpınar B. Deep neural network approaches for detecting gastric polyps in endoscopic images. Med Biol Eng Comput 2021; 59:1563-1574. [PMID: 34259974] [DOI: 10.1007/s11517-021-02398-8]
Abstract
Gastrointestinal endoscopy is the primary method used for the diagnosis and treatment of gastric polyps. The early detection and removal of polyps is vitally important in preventing cancer development. Many studies indicate that a high workload can contribute to the misdiagnosis of gastric polyps, even for experienced physicians. In this study, we aimed to establish a deep learning-based computer-aided diagnosis system for automatic gastric polyp detection. A private gastric polyp dataset consisting of 2195 endoscopic images and 3031 polyp labels was generated for this purpose, using retrospective gastrointestinal endoscopy data from the Karadeniz Technical University, Farabi Hospital. YOLOv4, CenterNet, EfficientNet, Cross Stage ResNext50-SPP, YOLOv3, YOLOv3-SPP, Single Shot Detection, and Faster Regional CNN deep learning models were implemented and assessed to determine the most efficient model for precancerous gastric polyp detection. The dataset was split 70%/30% for training and testing all the implemented models. YOLOv4 was the most accurate model, with an 87.95% mean average precision. We also evaluated all the deep learning models on a public gastric polyp dataset as the test data. The results show that YOLOv4 has significant potential applicability in detecting gastric polyps and can be used effectively in gastrointestinal CAD systems.
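Detection models such as YOLOv4 are scored by mean average precision, which rests on matching each predicted box to a ground-truth polyp by intersection-over-union (IoU). A minimal, illustrative IoU sketch, assuming corner-format boxes; this is not code from the study:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes, each (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area; 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction overlapping a quarter of a same-sized ground-truth box
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
```

A detection is conventionally counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; per-class precision-recall curves are then averaged into the mAP figure quoted above.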
Affiliation(s)
- Serdar Durak, Murat Erkut: Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Bülent Bayram, Tolga Bakırman, Metehan Doğan, Mert Gürtürk, Burak Akpınar: Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
209. Recognizing Gastrointestinal Malignancies on WCE and CCE Images by an Ensemble of Deep and Handcrafted Features with Entropy and PCA Based Features Optimization. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10481-2]
210. Performance of a deep learning-based identification system for esophageal cancer from CT images. Esophagus 2021; 18:612-620. [PMID: 33635412] [DOI: 10.1007/s10388-021-00826-0]
Abstract
BACKGROUND Because cancers of hollow organs such as the esophagus are hard to detect even for expert physicians, it is important to establish diagnostic systems that support physicians and increase the accuracy of diagnosis. In recent years, deep learning-based artificial intelligence (AI) technology has been employed for medical image recognition. However, no CT-based diagnostic system employing deep learning technology has yet been established for esophageal cancer. PURPOSE To establish an AI-based diagnostic system for esophageal cancer from CT images. MATERIALS AND METHODS In this single-center, retrospective cohort study, 457 patients with primary esophageal cancer referred to our division between 2005 and 2018 were enrolled. We fine-tuned VGG16, an image recognition model based on a deep convolutional neural network (CNN), for the detection of esophageal cancer. We evaluated the diagnostic accuracy of the CNN using a test dataset of 46 cancerous CT images and 100 non-cancerous images and compared it to that of two radiologists. RESULTS Pre-treatment esophageal cancer stages of the patients included in the test dataset were clinical T1 (12 patients), clinical T2 (9 patients), clinical T3 (20 patients), and clinical T4 (5 patients). The CNN-based system showed a diagnostic accuracy of 84.2%, an F value of 0.742, a sensitivity of 71.7%, and a specificity of 90.0%. CONCLUSIONS Our AI-based diagnostic system succeeded in detecting esophageal cancer with high accuracy. More training with vast datasets collected from multiple centers would lead to even higher diagnostic accuracy and aid better decision making.
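The reported figures are mutually consistent: with 46 cancerous and 100 non-cancerous test images, a confusion matrix of 33 true positives, 13 false negatives, 90 true negatives, and 10 false positives reproduces the stated accuracy, F value, sensitivity, and specificity. The counts are inferred from the percentages for illustration, not taken from the paper:

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fn + tn + fp)      # overall accuracy
    prec = tp / (tp + fp)                      # precision
    f1 = 2 * prec * sens / (prec + sens)       # F value (harmonic mean)
    return acc, f1, sens, spec

# Counts consistent with the reported test set (46 cancerous, 100 non-cancerous)
acc, f1, sens, spec = binary_metrics(tp=33, fn=13, tn=90, fp=10)
print(round(acc, 3), round(f1, 3), round(sens, 3), round(spec, 3))
# 0.842 0.742 0.717 0.9
```

This kind of back-calculation is a quick sanity check when reading diagnostic-accuracy abstracts: the four reported statistics over-determine the confusion matrix, so they either agree or expose a reporting error.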
211. Zhang L, Zhang Y, Wang L, Wang J, Liu Y. Diagnosis of gastric lesions through a deep convolutional neural network. Dig Endosc 2021; 33:788-796. [PMID: 32961597] [DOI: 10.1111/den.13844]
Abstract
BACKGROUND AND AIMS A deep convolutional neural network (CNN) was used to achieve fast and accurate artificial intelligence (AI)-assisted diagnosis of early gastric cancer (GC) and other gastric lesions based on endoscopic images. METHODS A CNN-based diagnostic system combining a ResNet34 residual network structure and a DeepLabv3 structure was constructed and trained using 21,217 gastroendoscopic images of five gastric conditions: peptic ulcer (PU), early gastric cancer (EGC) and high-grade intraepithelial neoplasia (HGIN), advanced gastric cancer (AGC), gastric submucosal tumors (SMTs), and normal gastric mucosa without lesions. The trained CNN was evaluated using a test dataset of 1091 images. The accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the CNN were calculated. The CNN diagnosis was compared with those of 10 endoscopists with over 8 years of experience in endoscopic diagnosis. RESULTS The diagnostic specificity and PPV of the CNN were higher than those of the endoscopists for the EGC and HGIN images (specificity: 91.2% vs 86.7%, a difference of 4.5%, 95% CI 2.8-7.2%; PPV: 55.4% vs 41.7%, a difference of 13.7%, 95% CI 11.2-16.8%), and the diagnostic accuracy of the CNN was close to that of the endoscopists for the lesion-free, EGC and HGIN, PU, AGC, and SMTs images. The CNN processed all the test set images in 42 s. CONCLUSION The constructed CNN system could be used as a rapid auxiliary diagnostic instrument to detect EGC and HGIN, as well as other gastric lesions, and to reduce the workload of endoscopists.
Affiliation(s)
- Liming Zhang, Li Wang, Jiangyuan Wang, Yulan Liu: Department of Gastroenterology, Peking University People's Hospital, Beijing, China
- Yang Zhang: Internet Medical Department of Love Life Insurance Company, Beijing, China
212.
Abstract
This article explores advances in endoscopic neoplasia detection with supporting clinical evidence and future aims. The ability to detect early gastric neoplastic lesions amenable to curative endoscopic submucosal dissection provides the opportunity to decrease gastric cancer mortality rates. Newer imaging techniques offer enhanced views of mucosal and microvascular structures and show promise in differentiating benign from malignant lesions and improving targeted biopsies. Conventional chromoendoscopy is well studied and validated. Narrow band imaging demonstrates superiority over magnified white light. Autofluorescence imaging, i-scan, flexible spectral imaging color enhancement, and bright image enhanced endoscopy show promise but lack sufficient evidence to change current clinical practice.
Affiliation(s)
- Andrew Canakis: Department of Medicine, Boston University School of Medicine, Boston Medical Center, 72 East Concord Street, Evans 124, Boston, MA 02118, USA
- Raymond Kim: Division of Gastroenterology & Hepatology, University of Maryland Medical Center, University of Maryland School of Medicine, 22 South Greene Street, Baltimore, MD 21201, USA
213. Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157]
214. Feng XY, Xu X, Zhang Y, Xu YM, She Q, Deng B. Application of convolutional neural network in detecting and classifying gastric cancer. Artif Intell Gastrointest Endosc 2021. [DOI: 10.37126/aige.v2.i3.70]
215. Kim JH, Nam SJ, Park SC. Usefulness of artificial intelligence in gastric neoplasms. World J Gastroenterol 2021; 27:3543-3555. [PMID: 34239268] [PMCID: PMC8240061] [DOI: 10.3748/wjg.v27.i24.3543]
Abstract
Recently, studies in many medical fields have reported that image analysis based on artificial intelligence (AI) can be used to analyze structures or features that are difficult to identify with the human eye. Efforts to improve the diagnosis of early gastric cancer, such as narrow-band imaging technology, are ongoing; however, diagnosis often remains difficult. Therefore, AI-based diagnostic methods for endoscopic imaging have been developed, and their effectiveness has been confirmed in many studies. AI-based gastric cancer diagnostic programs have shown relatively high diagnostic accuracy and can differentially diagnose non-neoplastic lesions, including benign gastric ulcers and dysplasia. AI systems have also been developed that help predict the invasion depth of gastric cancer from endoscopic images and support observation of the stomach during endoscopy without blind spots. Therefore, if AI is used in the field of endoscopy, it is expected to aid in the diagnosis of gastric neoplasms and help determine the applicability of endoscopic therapy by predicting invasion depth.
Affiliation(s)
- Ji Hyun Kim, Seung-Joo Nam, Sung Chul Park: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Kangwon National University School of Medicine, Chuncheon 24289, Kangwon Do, South Korea
216. Feng XY, Xu X, Zhang Y, Xu YM, She Q, Deng B. Application of convolutional neural network in detecting and classifying gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:71-78. [DOI: 10.37126/aige.v2.i3.71]
Abstract
Gastric cancer (GC) is the fifth most common cancer in the world, and at present, esophagogastroduodenoscopy is recognized as an acceptable method for the screening and monitoring of GC. Convolutional neural networks (CNNs) are a type of deep learning model and have been widely used for image analysis. This paper reviews the application and prospects of CNNs in detecting and classifying GC, aiming to introduce a computer-aided diagnosis system and to provide evidence for subsequent studies.
Affiliation(s)
- Xin-Yi Feng, Xi Xu, Yun Zhang, Ye-Min Xu, Qiang She, Bin Deng: Department of Gastroenterology, Affiliated Hospital of Yangzhou University, Yangzhou 225000, Jiangsu Province, China
217. Tanabe S, Perkins EJ, Ono R, Sasaki H. Artificial intelligence in gastrointestinal diseases. Artif Intell Gastroenterol 2021; 2:69-76. [DOI: 10.35712/aig.v2.i3.69]
Abstract
Artificial intelligence (AI) applications are growing in medicine. It is important to understand the current state of AI applications prior to utilizing them in disease research and treatment. In this review, AI applications in the diagnosis and treatment of gastrointestinal diseases are studied and summarized. In most cases, AI studies used large amounts of data, including images, to learn to distinguish disease characteristics according to a human perspective. The detailed pros and cons of utilizing AI approaches should be investigated in advance to ensure the safe application of AI in medicine. Evidence suggests that the collaborative use of AI in both the diagnosis and treatment of diseases will increase the precision and effectiveness of medicine. Recent progress in genome technology, such as genome editing, provides a specific example where AI has revealed the diagnostic and therapeutic possibilities of RNA detection and targeting.
Affiliation(s)
- Shihori Tanabe: Division of Risk Assessment, Center for Biological Safety and Research, National Institute of Health Sciences, Kawasaki 210-9501, Japan
- Edward J Perkins: Environmental Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS 3180, United States
- Ryuichi Ono: Division of Cellular and Molecular Toxicology, Center for Biological Safety and Research, National Institute of Health Sciences, Kawasaki 210-9501, Japan
- Hiroki Sasaki: Department of Clinical Genomics, Fundamental Innovative Oncology Core, National Cancer Center Research Institute, Tokyo 104-0045, Japan
218. Bamba Y, Ogawa S, Itabashi M, Shindo H, Kameoka S, Okamoto T, Yamamoto M. Object and anatomical feature recognition in surgical video images based on a convolutional neural network. Int J Comput Assist Radiol Surg 2021; 16:2045-2054. [PMID: 34169465] [PMCID: PMC8224261] [DOI: 10.1007/s11548-021-02434-w]
Abstract
Purpose Artificial intelligence-enabled techniques can process large amounts of surgical data and may be utilized for clinical decision support to recognize or forecast adverse events in an actual intraoperative scenario. To develop an image-guided navigation technology that will help in surgical education, we explored the performance of a convolutional neural network (CNN)-based computer vision system in detecting intraoperative objects. Methods The surgical videos used for annotation were recorded during surgeries conducted in the Department of Surgery of Tokyo Women's Medical University from 2019 to 2020. Abdominal endoscopic images were cut out from manually captured surgical videos. An open-source programming framework for CNN was used to design a model that could recognize and segment objects in real time through IBM Visual Insights. The model was used to detect the GI tract, blood, vessels, uterus, forceps, ports, gauze and clips in the surgical images. Results The accuracy, precision and recall of the model were 83%, 80% and 92%, respectively. The mean average precision (mAP), the calculated mean of the precision for each object, was 91%. Among surgical tools, the highest recall and precision of 96.3% and 97.9%, respectively, were achieved for forceps. Among the anatomical structures, the highest recall and precision of 92.9% and 91.3%, respectively, were achieved for the GI tract. Conclusion The proposed model could detect objects in operative images with high accuracy, highlighting the possibility of using AI-based object recognition techniques for intraoperative navigation. Real-time object recognition will play a major role in navigation surgery and surgical education.
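The abstract defines mAP as the calculated mean of the precision for each object class, which is simpler than the ranked-precision mAP used in detection benchmarks. A one-line sketch of that definition; reusing only the two precisions quoted above (forceps and GI tract) as the class set is an illustrative assumption:

```python
def mean_average_precision(per_class_precision):
    """mAP as defined in this abstract: the mean of per-class precision values."""
    return sum(per_class_precision.values()) / len(per_class_precision)

# Hypothetical two-class example built from the quoted per-class precisions
precisions = {"forceps": 0.979, "GI tract": 0.913}
print(round(mean_average_precision(precisions), 3))  # 0.946
```

The study's reported 91% mAP averages over all eight object categories; the snippet only makes the averaging rule concrete.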
Affiliation(s)
- Yoshiko Bamba
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Shimpei Ogawa
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Michio Itabashi
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Takahiro Okamoto
- Department of Breast Endocrinology Surgery, Tokyo Women's Medical University, Tokyo, Japan
- Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
219. Hsiao YJ, Wen YC, Lai WY, Lin YY, Yang YP, Chien Y, Yarmishyn AA, Hwang DK, Lin TC, Chang YC, Lin TY, Chang KJ, Chiou SH, Jheng YC. Application of artificial intelligence-driven endoscopic screening and diagnosis of gastric cancer. World J Gastroenterol 2021; 27:2979-2993. [PMID: 34168402] [PMCID: PMC8192292] [DOI: 10.3748/wjg.v27.i22.2979]
Abstract
The landscape of gastrointestinal endoscopy continues to evolve as new technologies and techniques become available. The advent of image-enhanced and magnifying endoscopies has marked a step toward perfecting endoscopic screening and diagnosis of gastric lesions. Simultaneously, with the development of convolutional neural networks, artificial intelligence (AI) has made unprecedented breakthroughs in medical imaging, including the ongoing trials of computer-aided detection of colorectal polyps and gastrointestinal bleeding. In the past half-decade, applications of AI systems in gastric cancer have also emerged. With AI's efficient computational power and learning capacities, endoscopists can improve their diagnostic accuracy and avoid missing or mischaracterizing gastric neoplastic changes. So far, several AI systems that incorporate both traditional and novel endoscopy technologies have been developed for various purposes, with most systems achieving an accuracy of more than 80%. However, their feasibility, effectiveness, and safety in clinical practice remain to be seen, as there have been no clinical trials yet. Nonetheless, AI-assisted endoscopy points toward more accurate and sensitive ways for early detection, treatment guidance and prognosis prediction of gastric lesions. This review summarizes the current status of various AI applications in gastric cancer and pinpoints directions for future research and clinical practice implementation from a clinical perspective.
Affiliation(s)
- Yu-Jer Hsiao
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yuan-Chih Wen
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Medical Education, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Wei-Yi Lai
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yi-Ying Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yi-Ping Yang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Internal Medicine, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Critical Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Yueh Chien
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- De-Kuang Hwang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
- Tai-Chi Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112201, Taiwan
- Yun-Chia Chang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Ting-Yi Lin
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Department of Medicine, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
- Kao-Jung Chang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Shih-Hwa Chiou
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Institute of Pharmacology, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Clinical Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Ying-Chun Jheng
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Big Data Center, Taipei Veterans General Hospital, Taipei 112201, Taiwan
220. Bang CS. [Deep Learning in Upper Gastrointestinal Disorders: Status and Future Perspectives]. Korean J Gastroenterol 2021; 75:120-131. [PMID: 32209800] [DOI: 10.4166/kjg.2020.75.3.120]
Abstract
Artificial intelligence using deep learning has been applied to gastrointestinal disorders for the detection, classification, and delineation of various lesion images. With the accumulation of enormous medical records, the evolution of computation power with graphic processing units, and the widespread use of open-source libraries in large-scale machine learning processes, medical artificial intelligence is overcoming its traditional limitations. This paper explains the basic concepts of deep learning model establishment and summarizes previous studies on upper gastrointestinal disorders. The limitations and perspectives on future development are also discussed.
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea
221. Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549] [PMCID: PMC8173384] [DOI: 10.3748/wjg.v27.i21.2681]
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be reconciled by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of the barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
222. Murakami D, Yamato M, Amano Y, Tada T. Challenging detection of hard-to-find gastric cancers with artificial intelligence-assisted endoscopy. Gut 2021; 70:1196-1198. [PMID: 32816967] [PMCID: PMC8108284] [DOI: 10.1136/gutjnl-2020-322453]
Affiliation(s)
- Daisuke Murakami
- Department of Gastroenterology, New Tokyo Hospital, Chiba, Japan
- Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Masayuki Yamato
- Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Yuji Amano
- Department of Endoscopy, New Tokyo Hospital, Chiba, Japan
- Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
223. Lazăr DC, Avram MF, Faur AC, Romoşan I, Goldiş A. The role of computer-assisted systems for upper-endoscopy quality monitoring and assessment of gastric lesions. Gastroenterol Rep (Oxf) 2021; 9:185-204. [PMID: 34316369] [PMCID: PMC8309682] [DOI: 10.1093/gastro/goab008]
Abstract
This article analyses the literature regarding the value of computer-assisted systems in esogastroduodenoscopy quality monitoring and the assessment of gastric lesions. Current data show promising results in upper-endoscopy quality control and a satisfactory detection accuracy for gastric premalignant and malignant lesions, similar to or even exceeding that of experienced endoscopists. Moreover, artificial systems support decisions on the best treatment strategies in gastric-cancer patient care, namely endoscopic vs surgical resection according to tumor depth. In so doing, unnecessary surgical interventions would be avoided whilst providing a better quality of life and prognosis for these patients. All these performance data have been revealed by numerous studies using different artificial intelligence (AI) algorithms in addition to white-light endoscopy or novel endoscopic techniques that are available in expert endoscopy centers. It is expected that ongoing clinical trials involving AI and the embedding of computer-aided diagnosis systems into endoscopic devices will enable real-life implementation of AI endoscopic systems in the near future and, at the same time, will help to overcome the current limits of computer-assisted systems, leading to improved performance. These benefits should lead to better diagnostic and treatment strategies for gastric-cancer patients. Furthermore, the incorporation of AI algorithms in endoscopic tools, along with the development of large electronic databases containing endoscopic images, might help in upper-endoscopy assistance and could be used for telemedicine purposes and second opinions for difficult cases.
Affiliation(s)
- Daniela Cornelia Lazăr
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, "Victor Babeș" University of Medicine and Pharmacy Timișoara, Timișoara, Romania
- Mihaela Flavia Avram
- Department of Surgery X, 1st Surgery Discipline, "Victor Babeș" University of Medicine and Pharmacy Timișoara, Timișoara, Romania
- Alexandra Corina Faur
- Department I, Discipline of Anatomy and Embriology, "Victor Babeș" University of Medicine and Pharmacy Timișoara, Timișoara, Romania
- Ioan Romoşan
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, "Victor Babeș" University of Medicine and Pharmacy Timișoara, Timișoara, Romania
- Adrian Goldiş
- Department VII of Internal Medicine II, Discipline of Gastroenterology and Hepatology, "Victor Babeș" University of Medicine and Pharmacy Timișoara, Timișoara, Romania
224. Zhang M, Zhu C, Wang Y, Kong Z, Hua Y, Zhang W, Si X, Ye B, Xu X, Li L, Heng D, Liu B, Tian S, Wu J, Dang Y, Zhang G. Differential diagnosis for esophageal protruded lesions using a deep convolution neural network in endoscopic images. Gastrointest Endosc 2021; 93:1261-1272.e2. [PMID: 33065026] [DOI: 10.1016/j.gie.2020.10.005]
Abstract
BACKGROUND AND AIMS: Recent advances in deep convolutional neural networks (CNNs) have led to remarkable results in digestive endoscopy. In this study, we aimed to develop CNN-based models for the differential diagnosis of benign esophageal protruded lesions using endoscopic images acquired in real clinical settings.
METHODS: We retrospectively reviewed the images from 1217 patients who underwent white-light endoscopy (WLE) and EUS between January 2015 and April 2020. Three deep CNN models were developed to accomplish the following tasks: (1) identification of esophageal benign lesions from healthy controls using WLE images; (2) differentiation of 3 subtypes of esophageal protruded lesions (esophageal leiomyoma [EL], esophageal cyst [EC], and esophageal papilloma [EP]) using WLE images; and (3) discrimination between EL and EC using EUS images. Six endoscopists blinded to the patients' clinical status were enrolled to interpret all images independently. Their diagnostic performances were evaluated and compared with the CNN models using the area under the receiver operating characteristic curve (AUC).
RESULTS: For task 1, the CNN model achieved an AUC of 0.751 (95% confidence interval [CI], 0.652-0.850) in identifying benign esophageal lesions. For task 2, the proposed model using WLE images for differentiation of esophageal protruded lesions achieved AUCs of 0.907 (95% CI, 0.835-0.979), 0.897 (95% CI, 0.841-0.953), and 0.868 (95% CI, 0.769-0.968) for EP, EL, and EC, respectively. The CNN model achieved equivalent or higher identification accuracy for EL and EC compared with skilled endoscopists. In the task of discriminating EL from EC (task 3), the proposed CNN model had AUC values of 0.739 (EL; 95% CI, 0.600-0.878) and 0.724 (EC; 95% CI, 0.567-0.881), which outperformed senior and novice endoscopists. Attempts to combine the CNN and endoscopist predictions led to significantly improved diagnostic accuracy compared with endoscopists' interpretations alone.
CONCLUSIONS: Our team established CNN-based methodologies to recognize benign esophageal protruded lesions using routinely obtained WLE and EUS images. Preliminary results combining the outputs of the models and the endoscopists underscored the potential of ensemble models for improved differentiation of lesions in real endoscopic settings.
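AUC, the comparison metric used throughout this study, reduces to the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counted as half). A minimal sketch of that rank-based computation, with invented labels and scores rather than the study's data:

```python
# ROC AUC via its rank (Mann-Whitney) formulation: the fraction of
# positive/negative score pairs where the positive wins. Data illustrative.

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]                  # 1 = lesion, 0 = control
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]  # model probabilities
print(f"AUC = {roc_auc(labels, scores):.3f}")
```

In practice a library routine such as scikit-learn's `roc_auc_score` computes the same quantity from sorted thresholds; the pair-counting form above is just the easiest to verify by hand.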
Affiliation(s)
- Min Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Chang Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yun Wang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zihao Kong
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yifei Hua
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Weifeng Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xinmin Si
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Bixing Ye
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xiaobing Xu
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Lurong Li
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Ding Heng
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yini Dang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Guoxin Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
225. Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355] [DOI: 10.2174/1573405616666200928144626]
Abstract
BACKGROUND: Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is not practical for an expert physician, because thoroughly analyzing thousands of images is a time-consuming task. Hence arises the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze images. Many researchers have proposed techniques for automated recognition and classification of abnormalities in captured images.
METHODS: In this article, existing methods for automated classification, segmentation and detection of several GI diseases are discussed. The paper gives comprehensive detail about these state-of-the-art methods. Furthermore, the literature is divided into several subsections based on preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques and deep-learning-based techniques. Finally, issues, challenges and limitations are also addressed.
RESULTS: A comparative analysis of different approaches for the detection and classification of GI infections is presented.
CONCLUSION: This comprehensive review article gathers information related to a number of GI disease diagnosis methods in one place. It will facilitate researchers in developing new algorithms and approaches for early detection of GI diseases, with more promising results than the existing methods in the literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
226. Application of Convolutional Neural Networks for Detection of Superficial Nonampullary Duodenal Epithelial Tumors in Esophagogastroduodenoscopic Images. Clin Transl Gastroenterol 2021; 11:e00154. [PMID: 32352719] [PMCID: PMC7145048] [DOI: 10.14309/ctg.0000000000000154]
Abstract
A superficial nonampullary duodenal epithelial tumor (SNADET) is defined as a mucosal or submucosal sporadic tumor of the duodenum that does not arise from the papilla of Vater. SNADETs rarely metastasize to the lymph nodes, and most can be treated endoscopically. However, SNADETs are sometimes missed during esophagogastroduodenoscopic examination. In this study, we constructed a convolutional neural network (CNN) and evaluated its ability to detect SNADETs.
227. Klang E, Barash Y, Levartovsky A, Barkin Lederer N, Lahat A. Differentiation Between Malignant and Benign Endoscopic Images of Gastric Ulcers Using Deep Learning. Clin Exp Gastroenterol 2021; 14:155-162. [PMID: 33981151] [PMCID: PMC8107004] [DOI: 10.2147/ceg.s292857]
Abstract
Background and Aim: Endoscopic differentiation between malignant and benign gastric ulcers (GU) affects further evaluation and prognosis. The aim of our study was to evaluate a deep learning algorithm for discrimination between benign and malignant GU in a database of endoscopic ulcer images.
Methods: We retrospectively collected consecutive upper gastrointestinal endoscopy images of GU obtained between 2011 and 2019 at the Sheba Medical Center. All ulcers had a corresponding histopathology result of either benign peptic ulcer or gastric adenocarcinoma. A convolutional neural network (CNN) was trained to classify the images as either benign or malignant. Endoscopies from 2011 to 2017 were used for training (2011-2015) and validation (2016-2017). Hyper-parameters, image augmentation and pre-training on images obtained from Google Images were evaluated on the validation data. Held-out data from 2018 to 2019 were used for testing the final model.
Results: Overall, the Sheba dataset included 1978 GU images: 1894 images of benign GU and 84 images of malignant ulcers. The final CNN model showed an AUC of 0.91 (95% CI 0.85-0.96) for detecting malignant ulcers. At a cut-off probability of 0.5, the network showed a sensitivity of 92% and a specificity of 75% for malignant ulcers.
Conclusion: Our study displays the applicability of a CNN model for automated evaluation of gastric ulcer images for malignant potential. Following further research, the algorithm may improve the accuracy of differentiating benign from malignant ulcers during endoscopies and assist in patient stratification, allowing accelerated patient management and an individualized approach toward surveillance endoscopy.
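The reported operating point (92% sensitivity, 75% specificity at a cut-off probability of 0.5) comes from thresholding the CNN's output probabilities. A minimal sketch of that computation, with invented labels and probabilities rather than the study's data:

```python
# Sensitivity and specificity at a probability cut-off: threshold the
# predicted probabilities, then count the confusion-matrix cells.
# Labels and probabilities are illustrative only.

def sens_spec(labels, probs, cutoff=0.5):
    preds = [1 if p >= cutoff else 0 for p in probs]
    tp = sum(y == 1 and yh == 1 for y, yh in zip(labels, preds))
    fn = sum(y == 1 and yh == 0 for y, yh in zip(labels, preds))
    tn = sum(y == 0 and yh == 0 for y, yh in zip(labels, preds))
    fp = sum(y == 0 and yh == 1 for y, yh in zip(labels, preds))
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity

labels = [1, 1, 1, 1, 0, 0, 0, 0]                  # 1 = malignant, 0 = benign
probs  = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.6, 0.1]  # model probabilities
sens, spec = sens_spec(labels, probs)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Sweeping `cutoff` from 0 to 1 traces the ROC curve whose area is the reported AUC of 0.91; the paper's 0.5 cut-off is one point on that curve.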
Affiliation(s)
- Eyal Klang
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel
- DeepVision Lab, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Yiftach Barash
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel
- DeepVision Lab, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Asaf Levartovsky
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel
- Department of Gastroenterology, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Noam Barkin Lederer
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel
- Department of Gastroenterology, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Adi Lahat
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel
- Department of Gastroenterology, Chaim Sheba Medical Center, Tel Hashomer, Israel
228. Su R, Liu J, Wu B, Xie Y, Zhang Y, Zhang W, Zhang Y, Wan M, Tian Z, Hu Y. Accurate measurement of colorectal polyps using computer-aided analysis. Eur J Gastroenterol Hepatol 2021; 33:701-708. [PMID: 33787542] [DOI: 10.1097/meg.0000000000002162]
Abstract
INTRODUCTION: The majority of colorectal cancers are thought to evolve from colorectal adenomas. In this study, we explored the use of computer-aided diagnosis (CAD) in the detection of colorectal polyps and the estimation of their sizes, which is important for the diagnosis and management of colorectal cancer.
MATERIALS AND METHODS: As the distance between the colonoscope and the lesion increases, magnification tends to decrease. Therefore, the size of colorectal polyps can be calculated from the captured image and the shooting distance. In this study, the fitting curve of the magnification of the electronic colonoscope was obtained by simulating the intestinal tract and polyps in vitro. The distance was then artificially controlled during the endoscopic operation, and the image was taken at a preset distance. The CAD system was trained on the overall shape of colorectal polyps, and image segmentation was employed to accurately identify them. Finally, on the basis of the magnification factor, the real size of a polyp was predicted from the shooting distance and the size of the segmented image region.
RESULTS: The CAD system can automatically delineate the extent of colorectal polyps and calculate their true size according to the magnification at the corresponding distance.
CONCLUSIONS: In this study, we developed a method for accurately estimating the size of colorectal polyps. The approach is compatible with many devices, which would expand its range of applications, and it has potential for application in other areas of clinical diagnosis.
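The calibrate-then-invert idea described above can be sketched as follows. The inverse-distance magnification model and all numbers are assumptions for illustration, not the study's calibration data:

```python
# Size-estimation sketch: calibrate pixels-per-mm against shooting distance
# in vitro, fit a 1/distance magnification model, then invert it to convert
# a segmented polyp's pixel extent into millimetres.

# In-vitro calibration pairs: (distance_mm, measured pixels_per_mm).
calibration = [(10, 40.0), (20, 20.5), (30, 13.2), (40, 10.1)]

# Fit m(d) = k / d by averaging the implied k = d * m over the pairs.
k = sum(d * m for d, m in calibration) / len(calibration)

def magnification(distance_mm):
    """Predicted pixels per mm at a given shooting distance."""
    return k / distance_mm

def polyp_size_mm(extent_px, distance_mm):
    """True size = segmented extent in pixels / pixels-per-mm."""
    return extent_px / magnification(distance_mm)

# A segmented polyp spanning 120 px, imaged at a preset 25 mm distance:
print(f"estimated polyp size: {polyp_size_mm(120, 25):.1f} mm")
```

Controlling the shooting distance, as the authors do, is what makes the inversion well-posed: with distance known, one calibration curve maps pixel extent to physical size.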
Affiliation(s)
- Ruizhang Su
- The School of Clinical Medicine, Fujian Medical University, Fuzhou
- Jie Liu
- The School of Clinical Medicine, Fujian Medical University, Fuzhou
- Bifang Wu
- The School of Clinical Medicine, Fujian Medical University, Fuzhou
- Yun Xie
- Department of Gastroenterology, Zhongshan Hospital Xiamen University, Xiamen
- Yi Zhang
- Pucheng County Hospital of Traditional Chinese Medicine, Pucheng, Fujian Province, People's Republic of China
- Wen Zhang
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, USA
- Yongxiu Zhang
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, USA
- Man Wan
- Department of Gastroenterology, Zhongshan Hospital Xiamen University, Xiamen
- Zhaoxu Tian
- Department of Gastroenterology, Shenzhen Longgang District People's Hospital, Shenzhen, Guangdong Province, People's Republic of China
- Yiqun Hu
- Department of Gastroenterology, Zhongshan Hospital Xiamen University, Xiamen
229. Hwang Y, Lee HH, Park C, Tama BA, Kim JS, Cheung DY, Chung WC, Cho YS, Lee KM, Choi MG, Lee S, Lee BI. Improved classification and localization approach to small bowel capsule endoscopy using convolutional neural network. Dig Endosc 2021; 33:598-607. [PMID: 32640059] [DOI: 10.1111/den.13787]
Abstract
BACKGROUND: Although great advances in artificial intelligence for interpreting small bowel capsule endoscopy (SBCE) images have been made in recent years, its practical use is still limited. The aim of this study was to develop a more practical convolutional neural network (CNN) algorithm for the automatic detection of various small bowel lesions.
METHODS: A total of 7556 images were collected for the training dataset from 526 SBCE videos. Abnormal images were classified into two categories: hemorrhagic lesions (red spot/angioectasia/active bleeding) and ulcerative lesions (erosion/ulcer/stricture). A CNN algorithm based on VGGNet was trained in two different ways: a combined model (hemorrhagic and ulcerative lesions trained separately) and a binary model (all abnormal images trained without discrimination). The detected lesions were visualized using a gradient-weighted class activation map (Grad-CAM). The two models were validated using 5760 independent images taken at two other academic hospitals.
RESULTS: Both the combined and binary models achieved high accuracy for lesion detection, and the difference between the two models was not significant (96.83% vs 96.62%, P = 0.122). However, the combined model showed higher sensitivity (97.61% vs 95.07%, P < 0.001) and higher accuracy for individual lesions from the hemorrhagic and ulcerative categories than the binary model. The combined model also revealed more accurate localization of the culprit area on images evaluated by Grad-CAM.
CONCLUSIONS: Diagnostic sensitivity and classification of small bowel lesions using a convolutional neural network are improved by independent training for hemorrhagic and ulcerative lesions. Grad-CAM is highly effective in localizing the lesions.
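The Grad-CAM localization used here weights each convolutional feature map by an importance weight (the spatially pooled gradient of the class score), sums across maps, and applies ReLU. A toy sketch of that combination step; the tiny feature maps and weights are invented values, not taken from the paper's VGGNet model:

```python
# Class-activation sketch in the spirit of Grad-CAM:
# cam[i][j] = ReLU(sum_k weights[k] * feature_maps[k][i][j]).
# In real Grad-CAM, weights[k] is the mean gradient of the class score
# with respect to feature map k; here they are toy values.

def class_activation_map(feature_maps, weights):
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for weight, fmap in zip(weights, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU

# Two 3x3 feature maps; the second activates near the "lesion" at (2, 2).
maps = [
    [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.1]],
    [[0.0, 0.0, 0.0], [0.0, 0.1, 0.3], [0.0, 0.4, 0.9]],
]
cam = class_activation_map(maps, weights=[0.2, 1.5])
peak = max((v, (i, j)) for i, row in enumerate(cam) for j, v in enumerate(row))
print(f"CAM peak {peak[0]:.2f} at {peak[1]}")
```

The peak of the map marks the region the classifier relied on, which is why the study can use it to check that the combined model localizes the culprit lesion more accurately.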
Affiliation(s)
- Yunseob Hwang
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea; Postech-Catholic Biomedical Engineering Institute, Pohang University of Science and Technology (POSTECH), Pohang, Korea
- Han Hee Lee
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea; Postech-Catholic Biomedical Engineering Institute, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Chunghyun Park
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea
- Bayu Adhi Tama
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea
- Jin Su Kim
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Dae Young Cheung
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Woo Chul Chung
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Young-Seok Cho
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Kang-Moon Lee
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Myung-Gyu Choi
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seungchul Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea; Postech-Catholic Biomedical Engineering Institute, Pohang University of Science and Technology (POSTECH), Pohang, Korea; Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, Korea
- Bo-In Lee
- Division of Gastroenterology, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
230
Yahagi N. Is artificial intelligence ready to replace expert endoscopists? Endoscopy 2021; 53:478-479. [PMID: 33887778] [DOI: 10.1055/a-1308-2121]
Affiliation(s)
- Naohisa Yahagi
- Division of Research and Development for Minimally Invasive Treatment, Cancer Center, Keio University School of Medicine, Tokyo, Japan
231
Kono M, Ishihara R, Kato Y, Miyake M, Shoji A, Inoue T, Matsueda K, Waki K, Fukuda H, Shimamoto Y, Fujiwara Y, Tada T. Diagnosis of pharyngeal cancer on endoscopic video images by Mask region-based convolutional neural network. Dig Endosc 2021; 33:569-576. [PMID: 32715508] [DOI: 10.1111/den.13800]
Abstract
OBJECTIVES We aimed to develop an artificial intelligence (AI) system for the real-time diagnosis of pharyngeal cancers. METHODS Endoscopic video images and still images of pharyngeal cancer treated in our facility were collected. A total of 4559 images of pathologically proven pharyngeal cancer (1243 using white light imaging and 3316 using narrow-band imaging/blue laser imaging) from 276 patients were used as a training dataset. The AI system used a convolutional neural network (CNN) model typical of the type used to analyze visual imagery. Supervised learning was used to train the CNN. The AI system was evaluated using an independent validation dataset of 25 video images of pharyngeal cancer and 36 video images of normal pharynx taken at our hospital. RESULTS The AI system diagnosed 23/25 (92%) pharyngeal cancers as cancers and 17/36 (47%) non-cancers as non-cancers. The transaction speed of the AI system was 0.03 s per image, which meets the required speed for real-time diagnosis. The sensitivity, specificity, and accuracy for the detection of cancer were 92%, 47%, and 66% respectively. CONCLUSIONS Our single-institution study showed that our AI system for diagnosing cancers of the pharyngeal region had promising performance with high sensitivity and acceptable specificity. Further training and improvement of the system are required with a larger dataset including multiple centers.
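The sensitivity, specificity, and accuracy reported above follow directly from the stated counts (23/25 cancers and 17/36 non-cancers correctly classified). A minimal sketch of that arithmetic from a 2x2 confusion table:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int):
    """Standard screening metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)        # true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts from the validation set: 23/25 cancers and 17/36 non-cancers correct.
sens, spec, acc = diagnostic_metrics(tp=23, fn=2, tn=17, fp=19)
print(f"{sens:.0%} {spec:.0%} {acc:.0%}")  # 92% 47% 66%
```

The low specificity here reflects the system flagging many normal pharynx frames, which the authors note requires further training on a larger dataset.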
Affiliation(s)
- Mitsuhiro Kono
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan; Department of Gastroenterology, Osaka City University Graduate School of Medicine, Osaka, Japan
- Ryu Ishihara
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Muneaki Miyake
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Ayaka Shoji
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Takahiro Inoue
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Katsunori Matsueda
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Kotaro Waki
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Hiromu Fukuda
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Yusaku Shimamoto
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Yasuhiro Fujiwara
- Department of Gastroenterology, Osaka City University Graduate School of Medicine, Osaka, Japan
- Tomohiro Tada
- AI Medical Service Inc., Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
232
Li Y, Zhou D, Liu TT, Shen XZ. Application of deep learning in image recognition and diagnosis of gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:12-24. [DOI: 10.37126/aige.v2.i2.12]
Abstract
In recent years, artificial intelligence has been extensively applied to the diagnosis of gastric cancer based on medical imaging, and deep learning, as one of the mainstream approaches in image processing, has made remarkable progress. In this paper, we provide a comprehensive literature survey of four electronic databases (PubMed, EMBASE, Web of Science, and Cochrane), with the search performed through November 2020. The article summarizes the existing image-recognition algorithms, reviews the available datasets used in gastric cancer diagnosis, and covers current trends in the application of deep learning to endoscopic image recognition of gastric cancer. We further evaluate the advantages and disadvantages of the current algorithms, summarize the characteristics of the existing image datasets, and, drawing on the latest progress in deep learning theory, propose suggestions for optimizing these algorithms. Based on existing research and applications, the labeling, quantity, size, resolution, and other aspects of the image datasets are also discussed. Future developments in this field are analyzed from two perspectives, algorithm optimization and data support, with the aim of improving diagnostic accuracy and reducing the risk of misdiagnosis.
Affiliation(s)
- Yu Li
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Da Zhou
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Tao-Tao Liu
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Xi-Zhong Shen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
233
Cao JS, Lu ZY, Chen MY, Zhang B, Juengpanich S, Hu JH, Li SJ, Topatana W, Zhou XY, Feng X, Shen JL, Liu Y, Cai XJ. Artificial intelligence in gastroenterology and hepatology: Status and challenges. World J Gastroenterol 2021; 27:1664-1690. [PMID: 33967550] [PMCID: PMC8072192] [DOI: 10.3748/wjg.v27.i16.1664]
Abstract
Originally proposed by John McCarthy in 1955, artificial intelligence (AI) has achieved a breakthrough and revolutionized the processing methods of clinical medicine with the increasing workloads of medical records and digital images. Doctors are paying attention to AI technologies for various diseases in the fields of gastroenterology and hepatology. This review will illustrate AI technology procedures for medical image analysis, including data processing, model establishment, and model validation. Furthermore, we will summarize AI applications in endoscopy, radiology, and pathology, such as detecting and evaluating lesions, facilitating treatment, and predicting treatment response and prognosis with excellent model performance. The current challenges for AI in clinical application include potential inherent bias in retrospective studies that requires larger samples for validation, ethics and legal concerns, and the incomprehensibility of the output results. Therefore, doctors and researchers should cooperate to address the current challenges and carry out further investigations to develop more accurate AI tools for improved clinical applications.
Affiliation(s)
- Jia-Sheng Cao
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Zi-Yi Lu
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Ming-Yu Chen
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Bin Zhang
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Sarun Juengpanich
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Jia-Hao Hu
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Shi-Jie Li
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Win Topatana
- Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Xue-Yin Zhou
- School of Medicine, Wenzhou Medical University, Wenzhou 325035, Zhejiang Province, China
- Xu Feng
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Ji-Liang Shen
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
- Yu Liu
- College of Life Sciences, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Xiu-Jun Cai
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
234
Tang D, Zhou J, Wang L, Ni M, Chen M, Hassan S, Luo R, Chen X, He X, Zhang L, Ding X, Yu H, Xu G, Zou X. A Novel Model Based on Deep Convolutional Neural Network Improves Diagnostic Accuracy of Intramucosal Gastric Cancer (With Video). Front Oncol 2021; 11:622827. [PMID: 33959495] [PMCID: PMC8095170] [DOI: 10.3389/fonc.2021.622827]
Abstract
Background and Aims: Prediction of intramucosal gastric cancer (GC) is a major challenge, and it is unclear whether artificial intelligence can assist endoscopists in its diagnosis. Methods: A deep convolutional neural network (DCNN) model was developed using 3407 endoscopic images retrospectively collected from 666 gastric cancer patients at two endoscopy centers (training dataset). The DCNN model's performance was tested with 228 images from 62 independent patients (testing dataset). The endoscopists evaluated the image and video testing datasets with and without the DCNN model's assistance, and the effects of assistance were investigated using correlation and linear regression analyses. Results: The DCNN model discriminated intramucosal GC from advanced GC with an AUC of 0.942 (95% CI, 0.915–0.970), a sensitivity of 90.5% (95% CI, 84.1%–95.4%), and a specificity of 85.3% (95% CI, 77.1%–90.9%) in the testing dataset. With the DCNN model's assistance, the diagnostic performance of novice endoscopists was comparable to that of experts (accuracy: 84.6% vs. 85.5%; sensitivity: 85.7% vs. 87.4%; specificity: 83.3% vs. 83.0%). The mean pairwise kappa value among endoscopists increased significantly with the DCNN model's assistance (0.430–0.629 vs. 0.660–0.861). The diagnostic duration decreased considerably with the DCNN model's assistance, from 4.35 s to 3.01 s. The correlation between endoscopists' perseverance of effort and diagnostic accuracy was diminished with the DCNN model (r: 0.470 vs. 0.076). Conclusions: An AI-assisted system was established and found useful for helping novice endoscopists achieve diagnostic performance comparable to that of experts.
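The pairwise kappa values quoted above measure chance-corrected agreement between pairs of endoscopists. A minimal sketch of Cohen's kappa for two raters; the binary ratings below are invented for illustration, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses (1 = intramucosal GC, 0 = advanced GC) from two raters.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 3))  # 0.583
```

A rise in mean pairwise kappa, as reported with DCNN assistance, indicates the endoscopists' judgments became more consistent with one another.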
Affiliation(s)
- Dehua Tang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Jie Zhou
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Lei Wang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Muhan Ni
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Min Chen
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Shahzeb Hassan
- Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- Renquan Luo
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xi Chen
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xinqi He
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Lihui Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiwei Ding
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Guifang Xu
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Xiaoping Zou
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
235
A review on recent advancements in diagnosis and classification of cancers using artificial intelligence. Biomedicine (Taipei) 2021; 10:5-17. [PMID: 33854922] [PMCID: PMC7721470] [DOI: 10.37796/2211-8039.1012]
Abstract
Artificial intelligence has brought drastic changes to radiology and medical imaging techniques, which in turn have led to tremendous changes in screening patterns. In particular, advancements in these techniques led to the development of the computer-aided detection (CAD) strategy. These approaches provide highly accurate diagnostic reports that serve as a "second opinion" to radiologists. However, with significant advancements in artificial intelligence, the diagnostic and classification capabilities of CAD systems are approaching the levels of radiologists and clinicians, shifting CAD from a second-opinion approach to a high-utility tool. This article reviews the strategies and algorithms developed using artificial intelligence for the diagnosis and classification of the foremost cancers, which overcome the challenges of traditional methods. In addition, possible directions for AI in medicine are also discussed.
236
Kim T, Kim J, Choi HS, Kim ES, Keum B, Jeen YT, Lee HS, Chun HJ, Han SY, Kim DU, Kwon S, Choo J, Lee JM. Artificial intelligence-assisted analysis of endoscopic retrograde cholangiopancreatography image for identifying ampulla and difficulty of selective cannulation. Sci Rep 2021; 11:8381. [PMID: 33863970] [PMCID: PMC8052314] [DOI: 10.1038/s41598-021-87737-3]
Abstract
The advancement of artificial intelligence (AI) has facilitated its application in medical fields. However, there has been little research on AI-assisted endoscopy, despite the clinical significance of efficient and safe cannulation in endoscopic retrograde cholangiopancreatography (ERCP). In this study, we aimed to assist endoscopists performing ERCP through automatic detection of the ampulla of Vater (AOV) and identification of cannulation difficulty. We developed a novel AI-assisted system based on convolutional neural networks that predicts the location of the ampulla and the difficulty of cannulation. ERCP data from 531 and 451 patients were used to evaluate our model on each task. Our model detected the ampulla with a mean intersection-over-union of 64.1%, precision of 76.2%, recall of 78.4%, and centroid distance of 0.021. In classifying cannulation difficulty, it achieved a recall of 71.9% for easy cases and 61.1% for difficult cases. Remarkably, our model accurately detected the AOV across varying morphological shapes, sizes, and textures, on par with the level of a human expert, and showed promising results for recognizing cannulation difficulty, demonstrating its potential to improve the quality of ERCP by assisting endoscopists.
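The mean intersection-over-union reported above for ampulla detection is the standard box-overlap metric. A minimal sketch with hypothetical box coordinates in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical predicted vs. ground-truth ampulla boxes.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A detection with IoU above a chosen threshold (commonly 0.5) against the annotated box is counted as correct, which is what the precision and recall figures summarize.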
Affiliation(s)
- Taesung Kim
- Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Jinhee Kim
- Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Hyuk Soon Choi
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Eun Sun Kim
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Bora Keum
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Yoon Tae Jeen
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Hong Sik Lee
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Hoon Jai Chun
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
- Sung Yong Han
- Department of Internal Medicine, Pusan National University College of Medicine, Pusan, Korea
- Dong Uk Kim
- Department of Internal Medicine, Pusan National University College of Medicine, Pusan, Korea
- Soonwook Kwon
- Department of Anatomy, Catholic University of Daegu, Daegu, Korea
- Jaegul Choo
- Graduate School of Artificial Intelligence, KAIST, Daehak-ro 291, Yuseong-gu, Daejeon, 34141, Korea
- Jae Min Lee
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University College of Medicine, Korea University Medical Center, Goryeodae-ro 73, Seongbuk-gu, Seoul, 02841, Korea
237
Pang X, Zhao Z, Weng Y. The Role and Impact of Deep Learning Methods in Computer-Aided Diagnosis Using Gastrointestinal Endoscopy. Diagnostics (Basel) 2021; 11:694. [PMID: 33919669] [PMCID: PMC8069844] [DOI: 10.3390/diagnostics11040694]
Abstract
At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and suitable for clinical practice compared with traditional machine learning. The application of traditional machine learning approaches to clinical practice is very challenging because medical data are usually uncharacteristic. However, deep learning methods with self-learning abilities can effectively make use of excellent computing abilities to learn intricate and abstract features. Thus, they are promising for the classification and detection of lesions through gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study aimed to address the research development of a CAD system based on deep learning in order to assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarized the limitations of the current methods and finally presented a prospect for future research.
Affiliation(s)
- Xuejiao Pang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Zijian Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Ying Weng
- School of Computer Science, University of Nottingham, Nottingham NG7 2RD, UK
238
Lee J, Wallace MB. State of the Art: The Impact of Artificial Intelligence in Endoscopy 2020. Curr Gastroenterol Rep 2021; 23:7. [PMID: 33855659] [DOI: 10.1007/s11894-021-00810-9]
Abstract
PURPOSE OF REVIEW Recently numerous researchers have shown remarkable progress using convolutional neural network-based artificial intelligence (AI) for endoscopy. In this manuscript we aim to summarize recent AI impact on endoscopy. RECENT FINDINGS AI for detecting colon polyps has been the most promising area for application of AI in endoscopy. Recent prospective randomized studies showed that AI assisted colonoscopy increased adenoma detection rate and the mean number of adenomas per patient compared to standard colonoscopy alone. AI for optical biopsy of colon polyp showed a negative predictive value of ≥90%. For capsule endoscopy, applying AI to pre-read the video images decreased physician reading time significantly. Recently, researchers are broadening the area of AI to quality assessment of endoscopy such as bowel preparation and automated report generation. AI systems have shown great potential to increase physician performance by enhancing detection, reducing procedure time, and providing real-time feedback of endoscopy quality. To build a generally applicable AI, we need further investigations in real world settings and also integration of AI tools into pragmatic platforms.
Affiliation(s)
- Jiyoung Lee
- Division of Gastroenterology and Hepatology, Endoscopy Unit, Mayo Clinic Jacksonville, 4500 San Pablo Road, Jacksonville, FL, 32224, USA; Health Screening and Promotion Center, Asan Medical Center, Seoul, South Korea
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Endoscopy Unit, Mayo Clinic Jacksonville, 4500 San Pablo Road, Jacksonville, FL, 32224, USA; Center of Research in Computer Vision, University of Central Florida, Orlando, FL, USA
239
Ability of artificial intelligence to detect T1 esophageal squamous cell carcinoma from endoscopic videos and the effects of real-time assistance. Sci Rep 2021; 11:7759. [PMID: 33833355] [PMCID: PMC8032773] [DOI: 10.1038/s41598-021-87405-6]
Abstract
Diagnosis using artificial intelligence (AI) with deep learning could be useful in endoscopic examinations. We investigated the ability of AI to detect superficial esophageal squamous cell carcinoma (ESCC) in esophagogastroduodenoscopy (EGD) videos. We retrospectively collected 8428 EGD images of esophageal cancer to develop a convolutional neural network through deep learning, and evaluated the detection accuracy of the AI diagnosing system against that of 18 endoscopists. We used 144 EGD videos for the two validation sets. First, we used 64 EGD observation videos of ESCCs captured with both white light imaging (WLI) and narrow-band imaging (NBI). We then evaluated the system using 80 EGD videos from 40 patients (20 with superficial ESCC and 20 without ESCC). In the first set, the AI system correctly diagnosed 100% of the ESCCs. In the second set, it correctly detected 85% (17/20) of the ESCCs; 75% (15/20) were detected by WLI and 55% (11/20) by NBI, and the positive predictive value was 36.7%. The endoscopists correctly detected 45% (range 25-70%) of the ESCCs. With real-time AI assistance, the endoscopists' sensitivities improved significantly compared with those without AI assistance (p < 0.05). AI can detect superficial ESCCs in EGD videos with high sensitivity, and endoscopist sensitivity improved with real-time AI support.
240
Miwa T, Minoda R, Yamaguchi T, Kita SI, Osaka K, Takeda H, Kanemaru SI, Omori K. Application of artificial intelligence using a convolutional neural network for detecting cholesteatoma in endoscopic enhanced images. Auris Nasus Larynx 2021; 49:11-17. [PMID: 33824034] [DOI: 10.1016/j.anl.2021.03.018]
Abstract
OBJECTIVE We examined whether artificial intelligence (AI) used with the novel digital image enhancement system modalities (CLARA+CHROMA, SPECTRA A, and SPECTRA B) could distinguish the cholesteatoma matrix, cholesteatoma debris, and normal middle ear mucosa, and observe the middle ear cavity during middle ear cholesteatoma surgery. METHODS A convolutional neural network (CNN) was trained with a set of images chosen by an otologist. To evaluate the diagnostic accuracy of the constructed CNN, an independent test data set of middle ear images was collected from 14 consecutive patients with 26 cholesteatoma matrix lesions, who underwent transcanal endoscopic ear surgery at a single hospital from August 2018 to September 2019. The final test data set included 58 total images, with 1‒5 images from each modality for each case. RESULTS The CNN required only 10 s to analyze more than 58 test images. Using SPECTRA A and SPECTRA B, the CNN correctly diagnosed 15 and 15 of 26 cholesteatoma matrix lesions, with a sensitivity of 34.6% and 42.3%, and with a specificity of 81.3% and 87.5%, respectively. CONCLUSION Our preliminary study revealed that AI and novel imaging modalities are potentially useful tools for identifying and visualizing the cholesteatoma matrix during endoscopic ear surgery. The diagnostic ability of the CNN is not yet appropriate for implementation in daily clinical practice, based on our study findings. However, in the future, these techniques and AI tools could help to reduce the burden on surgeons and will facilitate telemedicine in remote and rural areas, as well as in developing countries where the number of surgeons is limited.
Affiliation(s)
- Toru Miwa
- Department of Otolaryngology-Head and Neck Surgery, Kitano Hospital, Tazuke Kofukai Medical Research Institute, Osaka, Japan; Department of Otolaryngology-Head and Neck Surgery, Kyoto University, Kyoto, Japan; Otolaryngology-Head and Neck Surgery, JCHO Kumamoto General Hospital, Yatsushiro, Japan
- Ryosei Minoda
- Otolaryngology-Head and Neck Surgery, JCHO Kumamoto General Hospital, Yatsushiro, Japan
- Tomoya Yamaguchi
- Department of Otolaryngology-Head and Neck Surgery, Kitano Hospital, Tazuke Kofukai Medical Research Institute, Osaka, Japan
- Shin-Ichiro Kita
- Department of Otolaryngology-Head and Neck Surgery, Kitano Hospital, Tazuke Kofukai Medical Research Institute, Osaka, Japan
- Kazuto Osaka
- Department of Otolaryngology-Head and Neck Surgery, Kitano Hospital, Tazuke Kofukai Medical Research Institute, Osaka, Japan
- Hiroki Takeda
- Department of Otolaryngology-Head and Neck Surgery, Kumamoto University, Kumamoto, Japan
- Shin-Ichi Kanemaru
- Department of Otolaryngology-Head and Neck Surgery, Kitano Hospital, Tazuke Kofukai Medical Research Institute, Osaka, Japan
- Koichi Omori
- Department of Otolaryngology-Head and Neck Surgery, Kyoto University, Kyoto, Japan
Collapse
241
Takahashi Y, Sone K, Noda K, Yoshida K, Toyohara Y, Kato K, Inoue F, Kukita A, Taguchi A, Nishida H, Miyamoto Y, Tanikawa M, Tsuruga T, Iriyama T, Nagasaka K, Matsumoto Y, Hirota Y, Hiraike-Wada O, Oda K, Maruyama M, Osuga Y, Fujii T. Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy. PLoS One 2021; 16:e0248526. [PMID: 33788887 PMCID: PMC8011803 DOI: 10.1371/journal.pone.0248526] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 10/31/2020] [Accepted: 02/27/2021] [Indexed: 02/07/2023] Open
Abstract
Endometrial cancer is a ubiquitous gynecological disease with increasing global incidence. Therefore, despite the lack of an established screening technique to date, early diagnosis of endometrial cancer assumes critical importance. This paper presents an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images. In this study, 177 patients (60 with normal endometrium, 21 with uterine myoma, 60 with endometrial polyp, 15 with atypical endometrial hyperplasia, and 21 with endometrial cancer) with a history of hysteroscopy were recruited. Machine-learning techniques based on three popular deep neural network models were employed, and a continuity-analysis method was developed to enhance the accuracy of cancer diagnosis. Finally, we investigated if the accuracy could be improved by combining all the trained models. The results reveal that the diagnosis accuracy was approximately 80% (78.91–80.93%) when using the standard method, and it increased to 89% (83.94–89.13%) and exceeded 90% (i.e., 90.29%) when employing the proposed continuity analysis and combining the three neural networks, respectively. The corresponding sensitivity and specificity equaled 91.66% and 89.36%, respectively. These findings demonstrate the proposed method to be sufficient to facilitate timely diagnosis of endometrial cancer in the near future.
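The final step described above, combining the three trained networks, is a standard ensembling idea. A minimal sketch of soft voting (averaging per-class probabilities and taking the argmax), using hypothetical probability vectors and class ordering not taken from the paper, assuming each model outputs a normalized probability vector per image:

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Soft-voting ensemble: average per-class probabilities from
    several models, then return the argmax class and the average."""
    avg = np.mean(prob_maps, axis=0)
    return int(np.argmax(avg)), avg

# Hypothetical per-class probabilities (normal, benign, cancer)
# from three independently trained networks for one image:
m1 = [0.20, 0.30, 0.50]
m2 = [0.10, 0.25, 0.65]
m3 = [0.40, 0.35, 0.25]
label, avg = ensemble_predict(np.array([m1, m2, m3]))
print(label, np.round(avg, 3))
```

Averaging smooths out disagreement between the individual networks, which is one plausible mechanism behind the accuracy gain the authors report when combining models.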
Affiliation(s)
- Yu Takahashi
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kenbun Sone
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yusuke Toyohara
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kosuke Kato
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Futaba Inoue
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Asako Kukita
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ayumi Taguchi
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Haruka Nishida
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yuichiro Miyamoto
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Michihiro Tanikawa
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tetsushi Tsuruga
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Takayuki Iriyama
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kazunori Nagasaka
- Department of Obstetrics and Gynecology, Teikyo University School of Medicine, Tokyo, Japan
- Yoko Matsumoto
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yasushi Hirota
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Osamu Hiraike-Wada
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Katsutoshi Oda
- Division of Integrative Genomics, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yutaka Osuga
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tomoyuki Fujii
- Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
|
242
|
Development and Application of Artificial Intelligence in Auxiliary TCM Diagnosis. Evidence-Based Complementary and Alternative Medicine 2021; 2021:6656053. [PMID: 33763147 PMCID: PMC7955861 DOI: 10.1155/2021/6656053] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 12/20/2020] [Revised: 02/10/2021] [Accepted: 02/24/2021] [Indexed: 01/10/2023]
Abstract
As an emerging comprehensive discipline, artificial intelligence (AI) has been widely applied in various fields, including traditional Chinese medicine (TCM), a treasure of the Chinese nation. Realizing the organic combination of AI and TCM can promote the inheritance and development of TCM. The paper summarizes the development and application of AI in auxiliary TCM diagnosis, analyzes the bottleneck of artificial intelligence in the field of auxiliary TCM diagnosis at present, and proposes a possible future direction of its development.
243
Kulikajevas A, Maskeliunas R, Damaševičius R. Detection of sitting posture using hierarchical image composition and deep learning. PeerJ Comput Sci 2021; 7:e442. [PMID: 33834109 PMCID: PMC8022631 DOI: 10.7717/peerj-cs.442] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Received: 11/16/2020] [Accepted: 02/24/2021] [Indexed: 06/12/2023]
Abstract
Human posture detection allows the capture of the kinematic parameters of the human body, which is important for many applications, such as assisted living, healthcare, physical exercise, and rehabilitation. This task can greatly benefit from recent developments in deep learning and computer vision. In this paper, we propose a novel deep recurrent hierarchical network (DRHN) model based on MobileNetV2 that allows for greater flexibility by reducing or eliminating posture detection problems related to limited visibility of the human torso in the frame, i.e., the occlusion problem. The DRHN network accepts RGB-Depth frame sequences and produces a representation of semantically related posture states. We achieved 91.47% accuracy at a 10 fps rate for sitting posture recognition.
Affiliation(s)
- Audrius Kulikajevas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliunas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland
244
Ahmad Z, Rahim S, Zubair M, Abdul-Ghafar J. Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagn Pathol 2021; 16:24. [PMID: 33731170 PMCID: PMC7971952 DOI: 10.1186/s13000-021-01085-4] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Received: 11/09/2020] [Accepted: 03/04/2021] [Indexed: 02/08/2023] Open
Abstract
BACKGROUND The role of artificial intelligence (AI), defined as the ability of computers to perform tasks that normally require human intelligence, is constantly expanding. Medicine was slow to embrace AI. However, the role of AI in medicine is rapidly expanding and promises to revolutionize patient care in the coming years. In addition, it has the potential to democratize high-level medical care and make it accessible to all parts of the world. MAIN TEXT Among medical specialties, some, like radiology, were relatively quick to adopt AI, whereas others, especially pathology (and surgical pathology in particular), are only just beginning to utilize it. AI promises to play a major role in the accurate diagnosis, prognosis, and treatment of cancers. In this paper, the general principles of AI are defined first, followed by a detailed discussion of its current role in medicine. In the second half of this comprehensive review, the current and future role of AI in surgical pathology is discussed in detail, including an account of the practical difficulties involved and pathologists' fear of being replaced by computer algorithms. A number of recent studies demonstrating the usefulness of AI in the practice of surgical pathology are highlighted. CONCLUSION AI has the potential to transform the practice of surgical pathology by ensuring rapid and accurate results and by enabling pathologists to focus on higher-level diagnostic and consultative tasks, such as integrating molecular, morphologic, and clinical information to make accurate diagnoses in difficult cases, determining prognosis objectively, and in this way contributing to personalized care.
Affiliation(s)
- Zubair Ahmad
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Shabina Rahim
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Maha Zubair
- Department of Pathology and Laboratory Medicine, Aga Khan University Hospital, Karachi, Pakistan
- Jamshid Abdul-Ghafar
- Department of Pathology and Clinical Laboratory, French Medical Institute for Mothers and Children (FMIC), Kabul, Afghanistan
245
Jiang K, Jiang X, Pan J, Wen Y, Huang Y, Weng S, Lan S, Nie K, Zheng Z, Ji S, Liu P, Li P, Liu F. Current Evidence and Future Perspective of Accuracy of Artificial Intelligence Application for Early Gastric Cancer Diagnosis With Endoscopy: A Systematic and Meta-Analysis. Front Med (Lausanne) 2021; 8:629080. [PMID: 33791323 PMCID: PMC8005567 DOI: 10.3389/fmed.2021.629080] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 11/13/2020] [Accepted: 01/20/2021] [Indexed: 12/11/2022] Open
Abstract
Background & Aims: Gastric cancer is one of the most common malignancies worldwide. Endoscopy is currently the most effective method to detect early gastric cancer (EGC); however, endoscopy is not infallible, and EGC can be missed during the procedure. Artificial intelligence (AI)-assisted endoscopic diagnosis is a recent research hot spot. We aimed to quantify the diagnostic value of AI-assisted endoscopy in diagnosing EGC. Method: The PubMed, MEDLINE, Embase, and Cochrane Library databases were searched for articles on the application of AI-assisted endoscopy in EGC diagnosis. The pooled sensitivity, specificity, and area under the curve (AUC) were calculated, and the endoscopists' diagnostic performance was evaluated for comparison. Subgroups were defined by endoscopy modality and by number of training images. A funnel plot was drawn to assess publication bias. Result: 16 studies were included. AI-assisted endoscopic detection of EGC achieved an AUC of 0.96 (95% CI, 0.94–0.97), a sensitivity of 86% (95% CI, 77–92%), and a specificity of 93% (95% CI, 89–96%). For AI-assisted diagnosis of EGC invasion depth, the AUC was 0.82 (95% CI, 0.78–0.85), with a pooled sensitivity of 0.72 (95% CI, 0.58–0.82) and specificity of 0.79 (95% CI, 0.56–0.92). The funnel plot showed no publication bias. Conclusion: AI-assisted EGC diagnosis appeared more accurate than that of expert endoscopists. More prospective studies are needed before AI-aided EGC diagnosis can become universal in clinical practice.
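Pooled estimates like the sensitivities above are typically obtained by weighting study-level proportions on the logit scale. A simplified fixed-effect sketch with hypothetical study data (the review itself would have used a more sophisticated random-effects or bivariate model, so this only illustrates the pooling principle):

```python
import math

def pool_logit(props, ns):
    """Inverse-variance pooling of proportions on the logit scale.

    Fixed-effect sketch; real diagnostic meta-analyses use
    random-effects or bivariate models instead.
    """
    logits, weights = [], []
    for p, n in zip(props, ns):
        k = p * n
        # Continuity correction so logit and variance stay finite.
        k = min(max(k, 0.5), n - 0.5)
        p_adj = k / n
        var = 1.0 / k + 1.0 / (n - k)          # variance of the logit
        logits.append(math.log(p_adj / (1 - p_adj)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))      # back-transform

# Hypothetical per-study sensitivities and lesion counts:
print(round(pool_logit([0.90, 0.80, 0.85], [100, 60, 150]), 3))
```

Because pooling happens on the logit scale, larger studies (smaller variance) pull the pooled estimate toward their value, and the result always lies between the smallest and largest study-level proportions.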
Affiliation(s)
- Kailin Jiang
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaotao Jiang
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Jinglin Pan
- Department of Spleen-Stomach and Liver Diseases, Traditional Chinese Medicine Hospital of Hainan Province Affiliated to Guangzhou University of Chinese Medicine, Haikou, China
- Yi Wen
- Department of Gastroenterology, First Affiliation Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yuanchen Huang
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Senhui Weng
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shaoyang Lan
- Department of Gastroenterology, First Affiliation Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Kechao Nie
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Zhihua Zheng
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shuling Ji
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Peng Liu
- First College of Clinic Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Peiwu Li
- Department of Gastroenterology, First Affiliation Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Fengbin Liu
- Department of Gastroenterology, First Affiliation Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
246
Yang R, Yu Y. Artificial Convolutional Neural Network in Object Detection and Semantic Segmentation for Medical Imaging Analysis. Front Oncol 2021; 11:638182. [PMID: 33768000 PMCID: PMC7986719 DOI: 10.3389/fonc.2021.638182] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Received: 12/26/2020] [Accepted: 02/11/2021] [Indexed: 12/18/2022] Open
Abstract
In the era of digital medicine, a vast number of medical images are produced every day, and there is great demand for intelligent equipment to assist medical doctors across disciplines with adjuvant diagnosis. With the development of artificial intelligence, convolutional neural network (CNN) algorithms have progressed rapidly. CNNs and their extension algorithms play important roles in medical image classification, object detection, and semantic segmentation. While medical image classification has been widely reported, object detection and semantic segmentation of images are described far less often. In this review article, we introduce the progress of object detection and semantic segmentation in medical imaging studies. We also discuss how to accurately define the location and boundary of diseases.
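Defining the location and boundary of a lesion, as this review discusses, is usually scored with overlap metrics. A minimal sketch of the two standard ones: intersection-over-union (IoU) for detection boxes and the Dice coefficient for segmentation masks (toy inputs, not from the review):

```python
import numpy as np

def iou_boxes(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Two 10x10 boxes overlapping in a 5x5 region: intersection 25, union 175.
print(iou_boxes((0, 0, 10, 10), (5, 5, 15, 15)))
```

IoU above a threshold (commonly 0.5) decides whether a predicted box counts as a correct detection, while Dice is the usual headline metric for segmentation boundaries.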
Affiliation(s)
- Yingyan Yu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, Shanghai, China
247
Kleppe A, Skrede OJ, De Raedt S, Liestøl K, Kerr DJ, Danielsen HE. Designing deep learning studies in cancer diagnostics. Nat Rev Cancer 2021; 21:199-211. [PMID: 33514930 DOI: 10.1038/s41568-020-00327-9] [Citation(s) in RCA: 127] [Impact Index Per Article: 42.3] [Accepted: 12/09/2020] [Indexed: 12/16/2022]
Abstract
The number of publications on deep learning for cancer diagnostics is rapidly increasing, and systems are frequently claimed to perform comparably with or better than clinicians. However, few systems have yet demonstrated real-world medical utility. In this Perspective, we discuss reasons for the moderate progress and describe remedies designed to facilitate the transition to the clinic. Recent, presumably influential, deep learning studies in cancer diagnostics, the vast majority of which used images as input to the system, are evaluated to reveal the status of the field. By manipulating real data, we then exemplify that abundant and varied training data facilitate the generalizability of neural networks and thus the ability to use them clinically. To reduce the risk of biased performance estimation of deep learning systems, we advocate evaluation in external cohorts and strongly advise that the planned analyses, including a predefined primary analysis, be described in a protocol, preferentially stored in an online repository. Recommended protocol items should be established for the field, and we present our suggestions.
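One concrete way to report less biased performance on an external cohort, in the spirit of the recommendations above, is to attach a bootstrap confidence interval to the point estimate rather than quoting a single number. A minimal sketch with a hypothetical 200-case external test set (the cohort size and results are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(correct, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for accuracy from a 0/1 vector of
    per-case correctness on a held-out (ideally external) cohort."""
    correct = np.asarray(correct)
    stats = [rng.choice(correct, size=correct.size, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return correct.mean(), lo, hi

# Hypothetical external-cohort results: 170 of 200 cases correct.
acc, lo, hi = bootstrap_ci(np.r_[np.ones(170), np.zeros(30)])
print(f"accuracy={acc:.3f} 95% CI=({lo:.3f}, {hi:.3f})")
```

The interval width makes the uncertainty of a small external cohort visible, which a bare accuracy figure hides.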
Affiliation(s)
- Andreas Kleppe
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Ole-Johan Skrede
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Sepp De Raedt
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Knut Liestøl
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- David J Kerr
- Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK
- Håvard E Danielsen
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK
248
Larentzakis A, Lygeros N. Artificial intelligence (AI) in medicine as a strategic valuable tool. Pan Afr Med J 2021; 38:184. [PMID: 33995790 PMCID: PMC8106796 DOI: 10.11604/pamj.2021.38.184.28197] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 02/03/2021] [Accepted: 02/05/2021] [Indexed: 12/25/2022] Open
Abstract
Humans' creativity has led to machines that outperform human capabilities in terms of workload, effectiveness, precision, endurance, strength, and repetitiveness. It has always been a vision and a way to transcend existence and give more sense to life, which is precious. The common denominator of all these creations is that they were meant to replace, enhance, or go beyond the mechanical capabilities of the human body. The story took another bifurcation when Alan Turing introduced the concept of a machine that could think, in 1950. Artificial intelligence, introduced as a term in 1956, describes the use of computers to imitate intelligence and critical thinking comparable to humans. However, the revolution began in 1943, when artificial neural networks were proposed as an attempt to exploit the architecture of the human brain to perform tasks with which conventional algorithms had little success. Artificial intelligence is becoming a research focus and a tool of strategic value. The same observations apply in the field of healthcare, too. In this manuscript, we try to address key questions regarding artificial intelligence in medicine: what artificial intelligence is and how it works, what its value is in terms of application in medicine, and what its prospects are.
Affiliation(s)
- Andreas Larentzakis
- First Department of Propaedeutic Surgery, Athens Medical School, National and Kapodistrian University of Athens, Hippocration General Athens Hospital, Athens, Greece
- Nik Lygeros
- Laboratoire de Génie des Procédés Catalytiques, Centre National de la Recherche Scientifique/École Supérieure de Chimie Physique Électronique, Lyon, France
249
Sutton RA, Sharma P. Overcoming barriers to implementation of artificial intelligence in gastroenterology. Best Pract Res Clin Gastroenterol 2021; 52-53:101732. [PMID: 34172254 DOI: 10.1016/j.bpg.2021.101732] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/04/2021] [Accepted: 02/08/2021] [Indexed: 01/31/2023]
Abstract
Artificial intelligence is poised to revolutionize the field of medicine; however, significant questions must be answered before it can be implemented on a regular basis. Many artificial intelligence algorithms remain limited by isolated datasets, which may cause selection bias and truncated learning for the program. While a central database may solve this issue, several barriers, such as security, patient consent, and management structure, prevent it from being implemented. An additional barrier to daily use is device approval by the Food and Drug Administration. For this to occur, clinical studies must address new endpoints, including and beyond traditional bio- and medical statistics. These must showcase artificial intelligence's benefit and answer key questions, including challenges posed in the field of medical ethics.
Affiliation(s)
- Richard A Sutton
- University of Kansas Medical Center 3901 Rainbow Blvd, Kansas City, KS, USA; Kansas City Veteran's Affairs Medical Center 4801 Linwood Blvd, Kansas City, MO, USA
- Prateek Sharma
- University of Kansas Medical Center 3901 Rainbow Blvd, Kansas City, KS, USA; Kansas City Veteran's Affairs Medical Center 4801 Linwood Blvd, Kansas City, MO, USA
250
Wavelet Transform and Deep Convolutional Neural Network-Based Smart Healthcare System for Gastrointestinal Disease Detection. Interdiscip Sci 2021; 13:212-228. [PMID: 33566337 DOI: 10.1007/s12539-021-00417-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 10/21/2020] [Revised: 01/16/2021] [Accepted: 01/23/2021] [Indexed: 12/19/2022]
Abstract
This work presents a smart healthcare system for the detection of various abnormalities present in the gastrointestinal (GI) region with the help of time-frequency analysis and a convolutional neural network. The KVASIR V2 dataset, comprising eight classes of GI-tract images (normal cecum, normal pylorus, normal Z-line, esophagitis, polyps, ulcerative colitis, dyed and lifted polyp, and dyed resection margins), is used for training and validation. The initial phase of the work involves an image pre-processing step, followed by the extraction of approximate discrete wavelet transform coefficients. Each class of decomposed images is then given as input to a pair of convolutional neural network (CNN) models for training and testing at two different classification levels to recognize its predicted value. Classification performance is measured with the following indices: accuracy, precision, recall, specificity, and F1 score. The experimental results show 97.25% and 93.75% accuracy at the first and second classification levels, respectively. Lastly, a comparative performance analysis against several previously published works on a similar dataset shows that the proposed approach outperforms its contemporary methods.
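The "approximate discrete wavelet transform coefficients" step can be illustrated with the simplest wavelet, the Haar basis, whose one-level 2D approximation (LL) subband is just a scaled 2x2 block sum. A minimal NumPy sketch (illustrative only; the paper does not specify this exact implementation or wavelet):

```python
import numpy as np

def haar_ll(img):
    """One-level 2D Haar transform, keeping only the approximation
    (LL) subband: sum each 2x2 block and scale by 1/2."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2].astype(float)  # even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

# A 4x4 ramp image reduces to a 2x2 LL subband: [[5, 9], [21, 25]].
x = np.arange(16, dtype=float).reshape(4, 4)
print(haar_ll(x))
```

The LL subband halves each spatial dimension while keeping the low-frequency content, which is why feeding it to a CNN (as the paper does with approximation coefficients) reduces input size without discarding the coarse structure that classification relies on.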