301
Chahal D, Byrne MF. A primer on artificial intelligence and its application to endoscopy. Gastrointest Endosc 2020; 92:813-820.e4. [PMID: 32387497] [DOI: 10.1016/j.gie.2020.04.074]
Abstract
Artificial intelligence (AI) has emerged as a powerful and exciting new technology poised to impact many aspects of health care. In endoscopy, AI is now being used to detect and characterize benign and malignant GI lesions and assess malignant lesion depth of invasion. It will undoubtedly also find use in capsule endoscopy and inflammatory bowel disease. Herein, we provide the general endoscopist with a brief overview of AI and its emerging uses in our field. We also touch on the challenges of incorporating AI into clinical practice, such as workflow integration, data storage, and data privacy.
Affiliation(s)
- Daljeet Chahal
- Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Michael F Byrne
- Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada; Satisfai Health and AI4GI joint venture, Vancouver, British Columbia, Canada
302
Horiuchi Y, Hirasawa T, Ishizuka N, Tokai Y, Namikawa K, Yoshimizu S, Ishiyama A, Yoshio T, Tsuchida T, Fujisaki J, Tada T. Performance of a computer-aided diagnosis system in diagnosing early gastric cancer using magnifying endoscopy videos with narrow-band imaging (with videos). Gastrointest Endosc 2020; 92:856-865.e1. [PMID: 32422155] [DOI: 10.1016/j.gie.2020.04.079]
Abstract
BACKGROUND AND AIMS The performance of magnifying endoscopy with narrow-band imaging (ME-NBI) using a computer-aided diagnosis (CAD) system in diagnosing early gastric cancer (EGC) is unclear. Here, we aimed to clarify the differences in diagnostic performance between expert endoscopists and the CAD system using ME-NBI. METHODS The CAD system was pretrained using 1492 cancerous and 1078 noncancerous images obtained using ME-NBI. One hundred seventy-four videos (87 cancerous and 87 noncancerous) were used to evaluate the diagnostic performance of the CAD system in terms of the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). For each item, comparisons were made between the CAD system and 11 experts skilled in diagnosing EGC using ME-NBI, each with more than 1 year of clinical experience at our hospital. RESULTS The CAD system demonstrated an AUC of 0.8684. The accuracy, sensitivity, specificity, PPV, and NPV were 85.1% (95% confidence interval [95% CI], 79.0-89.6), 87.4% (95% CI, 78.8-92.8), 82.8% (95% CI, 73.5-89.3), 83.5% (95% CI, 74.6-89.7), and 86.7% (95% CI, 77.8-92.4), respectively. The CAD system was significantly more accurate than 2 experts, significantly less accurate than 1 expert, and not significantly different from the remaining 8 experts. CONCLUSIONS The overall performance of the CAD system using ME-NBI videos in diagnosing EGC was good and was equivalent to or better than that of several experts. The CAD system may prove useful in the diagnosis of EGC in clinical practice.
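The reported percentages are mutually consistent: with 87 cancerous and 87 noncancerous videos, they back-calculate to roughly 76 true positives, 11 false negatives, 72 true negatives, and 15 false positives. A quick sketch of the standard definitions (the confusion-matrix counts are inferred from the abstract, not stated in it):

```python
# Back-calculated counts for the 174-video test set (87 cancerous,
# 87 noncancerous). Inferred from the reported sensitivity (87.4%)
# and specificity (82.8%); not given directly in the abstract.
TP, FN = 76, 11   # cancerous videos classified correctly / incorrectly
TN, FP = 72, 15   # noncancerous videos classified correctly / incorrectly

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-performance metrics, as percentages."""
    return {
        "accuracy":    100 * (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv":         100 * tp / (tp + fp),
        "npv":         100 * tn / (tn + fn),
    }

m = diagnostic_metrics(TP, FP, TN, FN)
# Rounded to one decimal, these reproduce the abstract's figures:
# accuracy 85.1, sensitivity 87.4, specificity 82.8, PPV 83.5, NPV 86.7
```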
Affiliation(s)
- Yusuke Horiuchi
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Naoki Ishizuka
- Department of Clinical Trial Planning and Management, Cancer Institute Hospital, Tokyo, Japan
- Yoshitaka Tokai
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Ken Namikawa
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Shoichi Yoshimizu
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Akiyoshi Ishiyama
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Tomohiro Tsuchida
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital, Tokyo, Japan
- Tomohiro Tada
- AI Medical Service Inc., Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
303
Application of A Convolutional Neural Network in The Diagnosis of Gastric Mesenchymal Tumors on Endoscopic Ultrasonography Images. J Clin Med 2020; 9:jcm9103162. [PMID: 33003602] [PMCID: PMC7600226] [DOI: 10.3390/jcm9103162]
Abstract
Background and Aims: Endoscopic ultrasonography (EUS) is a useful diagnostic modality for evaluating gastric mesenchymal tumors; however, differentiating gastrointestinal stromal tumors (GISTs) from benign mesenchymal tumors such as leiomyomas and schwannomas remains challenging. For this reason, we developed a convolutional neural network computer-aided diagnosis (CNN-CAD) system that can analyze gastric mesenchymal tumors on EUS images. Methods: A total of 905 EUS images of gastric mesenchymal tumors (pathologically confirmed GIST, leiomyoma, and schwannoma) were used as a training dataset. Validation was performed using 212 EUS images of gastric mesenchymal tumors. This test dataset was interpreted by three experienced and three junior endoscopists. Results: The sensitivity, specificity, and accuracy of the CNN-CAD system for differentiating GISTs from non-GIST tumors were 83.0%, 75.5%, and 79.2%, respectively. Its diagnostic specificity and accuracy were significantly higher than those of two experienced endoscopists and one junior endoscopist. In the further sequential analysis to differentiate leiomyoma from schwannoma among the non-GIST tumors, the final diagnostic accuracy of the CNN-CAD system was 75.5%, significantly higher than that of two experienced endoscopists and one junior endoscopist. Conclusions: Our CNN-CAD system showed high accuracy in diagnosing gastric mesenchymal tumors on EUS images. It may complement current clinical practice in the EUS diagnosis of gastric mesenchymal tumors.
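The sequential analysis described here is a two-stage decision: first GIST vs non-GIST, then leiomyoma vs schwannoma within the non-GIST group. A minimal sketch of that routing logic (the classifier callables are hypothetical stand-ins, not the authors' CNN):

```python
from typing import Callable

def two_stage_diagnosis(
    image,
    is_gist: Callable,           # stage 1: hypothetical GIST vs non-GIST classifier
    non_gist_subtype: Callable,  # stage 2: leiomyoma vs schwannoma classifier
) -> str:
    """Route an EUS image through the two-stage scheme from the abstract."""
    if is_gist(image):
        return "GIST"
    # Only non-GIST tumors reach the second, finer-grained classifier.
    return non_gist_subtype(image)  # "leiomyoma" or "schwannoma"

# Example with stub classifiers standing in for the trained models:
label = two_stage_diagnosis(
    image="eus_frame_001",
    is_gist=lambda img: False,
    non_gist_subtype=lambda img: "leiomyoma",
)
# label == "leiomyoma"
```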
304
Frontiers of Robotic Gastroscopy: A Comprehensive Review of Robotic Gastroscopes and Technologies. Cancers (Basel) 2020; 12:cancers12102775. [PMID: 32998213] [PMCID: PMC7600666] [DOI: 10.3390/cancers12102775]
Abstract
Simple Summary: With rapid advancements in medical technology and patients' higher expectations for precise diagnostic and surgical outcomes, gastroscopy has been increasingly adopted for the detection and treatment of pathologies in the upper digestive tract. Correspondingly, robotic gastroscopes with advanced functionalities, e.g., disposable, dexterous, and non-invasive solutions, have been developed in recent years. This article extensively reviews these novel devices and describes their functionalities and performance. In addition, the implementation of artificial intelligence technology in robotic gastroscopes, combined with remote telehealth endoscopy services, is discussed. The aim of this paper is to provide a clear and comprehensive view of contemporary robotic gastroscopes and ancillary technologies, both to support medical practitioners in their future clinical practice and to inspire and drive new engineering developments.
Abstract: Upper gastrointestinal (UGI) tract pathology is common worldwide. With recent advancements in robotics, innovative diagnostic and treatment devices have been developed and several translational attempts made. This review aims to provide a highly pictorial critical overview of robotic gastroscopes, so that clinicians and researchers can obtain a swift and comprehensive view of key technologies and challenges. The paper therefore presents robotic gastroscopes that are either commercial or at an advanced technology readiness level, including tethered and wireless gastroscopes as well as devices aimed at UGI surgery. The technological features of these instruments, as well as their clinical adoption and performance, are described and compared. Although existing endoscopic devices have thus far provided substantial improvements in the effectiveness of diagnosis and treatment, certain aspects remain persistent predicaments of current gastroenterology practice. These difficulties and risks include transmission of communicable diseases (e.g., COVID-19) due to doctor-patient proximity, unchanged learning curves, variable detection rates, procedure-related adverse events, endoscopist and nurse burnout, limited human and material resources, and patients' preference for non-invasive options, all of which interfere with the successful implementation and adoption of routine screening. The combination of robotics and artificial intelligence, as well as remote telehealth endoscopy services, is also discussed as a set of viable emerging solutions to improve existing platforms for diagnosis and treatment.
305
Abstract
Almost all East Asian strains and 60% of Western H. pylori strains are cagA-positive. Infected patients develop more pronounced inflammation with ulceration of the stomach and are at higher risk of developing cancer. Objective: to improve the informative value of dysplasia diagnosis by combining white-light endoscopy with chromoscopy, supplemented by targeted brush biopsy with cytological examination. Methods and materials: from 2016 to 2018, the study included 41 patients undergoing examination and treatment for chronic gastritis: 16 (39%) men and 25 (61%) women, aged 19 to 86 years. All patients underwent esophagogastroduodenoscopy, chromoendoscopy with 0.5% methylene blue, and brush biopsy (scraping with a nylon brush). At least two brush preparations were obtained, from the body of the stomach and the antrum; scrapings were also taken from the surface of erosions and areas of atypical epithelial structure. Brush preparations were sent for cytological examination. Results: esophagogastroduodenoscopy revealed erosions in 37 (90.2%) patients, with spontaneous bleeding in 6 (14.6%) of them. Visual signs of atrophic gastritis were noted in 23 (56%) patients. Columnar epithelium of the intestinal type was revealed in 25 patients (61%) using methylene blue. Cytological examination of the brush preparations showed proliferation of the integumentary epithelium with signs of mild dysplasia in all cases; intestinal metaplasia was revealed in 27 patients (65.8%), and H. pylori was confirmed in 38 patients (92.6%). Conclusion: chromoscopy and brush biopsy are simple and affordable methods, and their integration into routine endoscopy increases the informative value of the study, namely by allowing detection of precancerous lesions of the mucosa.
Affiliation(s)
- A. A. Arkhipova
- State Budgetary Healthcare Institution of the Novosibirsk Region "City Clinical Hospital No. 2"
- V. V. Anischenko
- Novosibirsk State Medical University (NSMU) of the Ministry of Health of the Russian Federation
306
Zhang L, Dong D, Zhang W, Hao X, Fang M, Wang S, Li W, Liu Z, Wang R, Zhou J, Tian J. A deep learning risk prediction model for overall survival in patients with gastric cancer: A multicenter study. Radiother Oncol 2020; 150:73-80. [DOI: 10.1016/j.radonc.2020.06.010]
307
Ullah M, Akbar A, Yannarelli G. Applications of artificial intelligence in early detection of cancer, clinical diagnosis and personalized medicine. Artif Intell Cancer 2020; 1:39-44. [DOI: 10.35713/aic.v1.i2.39]
Abstract
Artificial intelligence (AI) refers to the simulation of human intelligence in machines programmed to convert raw input data into decision-making actions, like humans. AI programs are designed to make decisions, often using deep learning and computer-guided programs that analyze and process raw data to support clinical decision making for effective treatment. New techniques for predicting cancer at an early stage are needed, as conventional methods have poor accuracy and are not applicable to personalized medicine. AI has the potential to use smart, intelligent computer systems for image interpretation and early diagnosis of cancer, and it has been changing almost all areas of the medical field by integrating with newly emerging technologies. By enabling innovative digital diagnostics with greater precision and accuracy, AI can detect cancer at an early stage, with accurate diagnosis and improved survival outcomes. AI is thus an innovative technology of the future that can be used for the early prediction, diagnosis, and treatment of cancer.
Affiliation(s)
- Mujib Ullah
- Institute for Immunity, Transplantation, Stem Cell Biology and Regenerative Medicine, School of Medicine, Stanford University, Palo Alto, CA 94304, United States
- Molecular Medicine, Department of Radiology, School of Medicine, Stanford University, Palo Alto, CA 94304, United States
- Asma Akbar
- Institute for Immunity, Transplantation, Stem Cell Biology and Regenerative Medicine, School of Medicine, Stanford University, Palo Alto, CA 94304, United States
- Molecular Medicine, Department of Radiology, School of Medicine, Stanford University, Palo Alto, CA 94304, United States
- Gustavo Yannarelli
- Laboratorio de Regulación Génica y Células Madre, Instituto de Medicina Traslacional, Trasplante y Bioingeniería, Universidad Favaloro-CONICET, Buenos Aires 1078, Argentina
308
Shi XJ, Wei Y, Ji B. Systems Biology of Gastric Cancer: Perspectives on the Omics-Based Diagnosis and Treatment. Front Mol Biosci 2020; 7:203. [PMID: 33005629] [PMCID: PMC7479200] [DOI: 10.3389/fmolb.2020.00203]
Abstract
Gastric cancer is the fifth most commonly diagnosed cancer in the world, affecting more than a million people and causing nearly 783,000 deaths each year. The prognosis of advanced gastric cancer remains extremely poor despite the use of surgery and adjuvant therapy. Therefore, understanding the mechanisms of gastric cancer development and discovering novel diagnostic biomarkers and therapeutics are major goals in gastric cancer research. Here, we review recent progress in the application of omics technologies in gastric cancer research, with special focus on the utilization of systems biology approaches to integrate multi-omics data. In addition, the association between the gastrointestinal microbiota and gastric cancer is discussed, which may offer insights into exploring novel microbiota-targeted therapeutics. Finally, the application of data-driven systems biology and machine learning approaches could provide a predictive understanding of gastric cancer and pave the way to the development of novel biomarkers and the rational design of cancer therapeutics.
Affiliation(s)
- Xiao-Jing Shi
- Laboratory Animal Center, State Key Laboratory of Esophageal Cancer Prevention and Treatment, Academy of Medical Science, Zhengzhou University, Zhengzhou, China
- Yongjun Wei
- School of Pharmaceutical Sciences, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou University, Zhengzhou, China
- Boyang Ji
- Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Novo Nordisk Foundation Center for Biosustainability, Technical University of Denmark, Lyngby, Denmark
309
Igarashi S, Sasaki Y, Mikami T, Sakuraba H, Fukuda S. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet. Comput Biol Med 2020; 124:103950. [PMID: 32798923] [DOI: 10.1016/j.compbiomed.2020.103950]
Abstract
BACKGROUND Machine learning has led to several endoscopic studies on the automated localization of digestive lesions and the prediction of cancer invasion depth. Training and validation dataset collection is required for each disease in each digestive organ under similar image capture conditions; this is the first step in system development. This data cleansing task places a great burden on experienced endoscopists. Thus, this study classified upper gastrointestinal (GI) organ images obtained via routine esophagogastroduodenoscopy (EGD) into precise anatomical categories using AlexNet. METHOD In total, 85,246 raw upper GI endoscopic images from 441 patients with gastric cancer were collected retrospectively. The images were manually classified into 14 categories: 0) white-light (WL) stomach with indigo carmine (IC); 1) WL esophagus with iodine; 2) narrow-band (NB) esophagus; 3) NB stomach with IC; 4) NB stomach; 5) WL duodenum; 6) WL esophagus; 7) WL stomach; 8) NB oral-pharynx-larynx; 9) WL oral-pharynx-larynx; 10) WL scaling paper; 11) specimens; 12) WL muscle fibers during endoscopic submucosal dissection (ESD); and 13) others. AlexNet, a deep convolutional neural network architecture, was trained using 49,174 images and validated using 36,072 independent images. RESULTS The accuracy rates on the training and validation datasets were 0.993 and 0.965, respectively. CONCLUSIONS A simple anatomical organ classifier using AlexNet was developed and found to be effective for the data cleansing task in the collection of EGD images. Moreover, it could be useful to both expert and non-expert endoscopists, as well as engineers, in retrospectively assessing upper GI images.
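The cleansing step this classifier automates amounts to assigning each raw EGD frame one of the 14 category indices and keeping only those relevant to a given study. A sketch of that filtering (the category table follows the abstract; the keep-set and record format are illustrative assumptions, not taken from the paper):

```python
# The 14 categories listed in the abstract, indexed as in the paper.
CATEGORIES = {
    0: "WL stomach with indigo carmine", 1: "WL esophagus with iodine",
    2: "NB esophagus", 3: "NB stomach with indigo carmine", 4: "NB stomach",
    5: "WL duodenum", 6: "WL esophagus", 7: "WL stomach",
    8: "NB oral-pharynx-larynx", 9: "WL oral-pharynx-larynx",
    10: "WL scaling paper", 11: "specimens",
    12: "WL muscle fibers during ESD", 13: "others",
}

def cleanse(predictions, keep=frozenset(range(10))):
    """Keep only frames whose predicted category is anatomically relevant.

    `predictions` is a list of (image_id, category_index) pairs. By default
    categories 10-13 (scaling paper, specimens, ESD views, others) are
    discarded -- an illustrative choice, not a rule stated in the paper.
    """
    return [(img, CATEGORIES[c]) for img, c in predictions if c in keep]

sample = [("img001", 7), ("img002", 10), ("img003", 2)]
# cleanse(sample) keeps img001 ("WL stomach") and img003 ("NB esophagus")
# and drops img002 (scaling paper).
```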
Affiliation(s)
- Shohei Igarashi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, 53 Hon-cho, Hirosaki, 036-8563, Japan
- Tatsuya Mikami
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Hirotake Sakuraba
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Shinsaku Fukuda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
310
Namikawa K, Hirasawa T, Yoshio T, Fujisaki J, Ozawa T, Ishihara S, Aoki T, Yamada A, Koike K, Suzuki H, Tada T. Utilizing artificial intelligence in endoscopy: a clinician's guide. Expert Rev Gastroenterol Hepatol 2020; 14:689-706. [PMID: 32500760] [DOI: 10.1080/17474124.2020.1779058]
Abstract
INTRODUCTION Artificial intelligence (AI) that surpasses human ability in image recognition is expected to be applied in the field of gastrointestinal endoscopy, and its research and development (R&D) is being actively conducted. With the advance of endoscopic diagnosis, there is a shortage of specialists who can perform high-precision endoscopy. We examine whether AI, with its excellent image recognition ability, can overcome this problem. AREAS COVERED Since 2016, papers on artificial intelligence using convolutional neural networks (CNNs, a form of deep learning) have been published. CNNs are generally capable of more accurate detection and classification than conventional machine learning. This is a review of papers using CNNs in the gastrointestinal endoscopy area, along with the reasons why AI is required in clinical practice. We divided this review into four parts: stomach, esophagus, large intestine, and capsule endoscopy (small intestine). EXPERT OPINION Potential applications for AI include colorectal polyp detection and differentiation, gastric and esophageal cancer detection, and lesion detection in capsule endoscopy. The accuracy of endoscopic diagnosis will increase if the AI and the endoscopist perform the endoscopy together.
Affiliation(s)
- Ken Namikawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Tsuyoshi Ozawa
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Soichiro Ishihara
- Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Tomonori Aoki
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Atsuo Yamada
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Kazuhiko Koike
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Hideo Suzuki
- Department of Gastroenterology, Institute of Clinical Medicine, University of Tsukuba, Ibaraki, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan; AI Medical Service Inc., Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
311
Guo Y, Hao Z, Zhao S, Gong J, Yang F. Artificial Intelligence in Health Care: Bibliometric Analysis. J Med Internet Res 2020; 22:e18228. [PMID: 32723713] [PMCID: PMC7424481] [DOI: 10.2196/18228]
Abstract
Background As a critical driving power to promote health care, the health care–related artificial intelligence (AI) literature is growing rapidly. Objective The purpose of this analysis is to provide a dynamic and longitudinal bibliometric analysis of health care–related AI publications. Methods The Web of Science (Clarivate PLC) was searched to retrieve all existing and highly cited AI-related health care research papers published in English up to December 2019. Based on bibliometric indicators, a search strategy was developed to screen the title for eligibility, using the abstract and full text where needed. The growth rate of publications, characteristics of research activities, publication patterns, and research hotspot tendencies were computed using the HistCite software. Results The search identified 5235 hits, of which 1473 publications were included in the analyses. Publication output increased by an average of 17.02% per year since 1995, but the growth rate of research papers significantly increased to 45.15% per year from 2014 to 2019. The major health problems studied in AI research are cancer, depression, Alzheimer disease, heart failure, and diabetes. Artificial neural networks, support vector machines, and convolutional neural networks have the highest impact on health care. Nucleosides, convolutional neural networks, and tumor markers have remained research hotspots through 2019. Conclusions This analysis provides a comprehensive overview of the AI-related research conducted in the field of health care, which helps researchers, policy makers, and practitioners better understand the development of health care–related AI research and possible practice implications. Future AI research should be dedicated to filling in the gaps between AI health care research and clinical applications.
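An average annual growth rate like the 17.02% figure is typically a compound rate computed from yearly publication counts. A small sketch of that calculation (the example counts are made up for illustration, not taken from the study):

```python
def compound_annual_growth(first: float, last: float, years: int) -> float:
    """Compound annual growth rate, as a percentage, over `years` intervals."""
    return 100 * ((last / first) ** (1 / years) - 1)

# Illustrative only: a publication count that doubles over 5 years
# corresponds to a compound growth rate of about 14.87% per year.
rate = compound_annual_growth(first=100, last=200, years=5)
```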
Affiliation(s)
- Yuqi Guo
- School of Social Work, University of North Carolina at Charlotte, Charlotte, NC, United States
- Zhichao Hao
- School of Social Work, The University of Alabama, Tuscaloosa, AL, United States
- Shichong Zhao
- Social Welfare Program, School of Public Administration, Dongbei University of Finance and Economics, Dalian, China
- Jiaqi Gong
- Department of Information Systems, University of Maryland, Baltimore, MD, United States
- Fan Yang
- Social Welfare Program, School of Public Administration, Dongbei University of Finance and Economics, Dalian, China
312
Application of artificial intelligence in the diagnosis and prediction of gastric cancer. Artif Intell Gastroenterol 2020. [DOI: 10.35712/aig.v1.i1.12]
313
Qie YY, Xue XF, Wang XG, Dang SC. Application of artificial intelligence in the diagnosis and prediction of gastric cancer. Artif Intell Gastroenterol 2020; 1:12-18. [DOI: 10.35712/aig.v1.i1.12]
Abstract
Gastric cancer is the second leading cause of cancer deaths worldwide. Despite the great progress in the diagnosis and treatment of gastric cancer, the incidence and mortality rate of the disease in China are still relatively high. The high mortality rate of gastric cancer may be related to its low early diagnosis rate and poor prognosis. Much research has been focused on improving the sensitivity and specificity of diagnostic tools for gastric cancer, in order to more accurately predict the survival times of gastric cancer patients. Taking appropriate treatment measures is the key to reducing the mortality rate of gastric cancer. In the past decade, artificial intelligence technology has been applied to various fields of medicine as a branch of computer science. This article discusses the application and research status of artificial intelligence in gastric cancer diagnosis and survival prediction.
Affiliation(s)
- Yin-Yin Qie
- Department of General Surgery, The Affiliated Hospital, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Xiao-Fei Xue
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
- Xiao-Gang Wang
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
- Sheng-Chun Dang
- Department of General Surgery, The Affiliated Hospital, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Department of General Surgery, Pucheng Hospital, Weinan 715500, Shaanxi Province, China
314
Jin HY, Zhang M, Hu B. Techniques to integrate artificial intelligence systems with medical information in gastroenterology. Artif Intell Gastrointest Endosc 2020; 1:19-27. [DOI: 10.37126/aige.v1.i1.19]
Abstract
Gastrointestinal (GI) endoscopy is a central element of contemporary gastroenterology, as it provides direct evidence to guide targeted therapy. To increase the accuracy of GI endoscopy and reduce human-related errors, artificial intelligence (AI) has been applied in GI endoscopy and has proved effective in diagnosing and treating numerous diseases. We therefore review current research on the efficacy of AI-assisted GI endoscopy in order to assess its functions, its advantages, and how its design can be improved.
Affiliation(s)
- Hong-Yu Jin
- Department of Liver Surgery, Liver Transplantation Center, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Man Zhang
- Department of Gynecology and Obstetrics, West China Second University Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Bing Hu
- Department of Gastroenterology, Endoscopy Center, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
315
Masuzaki R, Kanda T, Sasaki R, Matsumoto N, Nirei K, Ogawa M, Moriyama M. Application of artificial intelligence in hepatology: Minireview. Artif Intell Gastroenterol 2020; 1:5-11. [DOI: 10.35712/aig.v1.i1.5]
Abstract
With the rapid advancements in computer science, artificial intelligence (AI) has become an intrinsic part of our daily life and clinical practice. The concepts of AI, such as machine learning, deep learning, and big data, are extensively used in clinical and basic research. In this review, we searched for articles in PubMed and summarized recent developments in AI concerning hepatology, focusing on the diagnosis and risk assessment of liver diseases. Ultrasound is widely used for the routine surveillance of hepatocellular carcinoma, along with tumor markers. Computer-aided diagnosis is useful in the detection of tumors and the characterization of space-occupying lesions. The prognosis of hepatocellular carcinoma can be estimated via AI using large-scale, high-quality training datasets. The prevalence of nonalcoholic fatty liver disease is increasing worldwide, and a pivotal concern in the field is identifying which patients will progress and develop hepatocellular carcinoma. Most AI studies require a large dataset, including laboratory or radiological findings and outcome data. AI will be useful in reducing medical errors, supporting clinical decisions, and predicting clinical outcomes. Thus, cooperation between AI and humans is expected to improve healthcare.
Collapse
Affiliation(s)
- Ryota Masuzaki
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Tatsuo Kanda
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Reina Sasaki
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Naoki Matsumoto
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Kazushige Nirei
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Masahiro Ogawa
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| | - Mitsuhiko Moriyama
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 173-8610, Japan
| |
Collapse
|
316
|
Morreale GC, Sinagra E, Vitello A, Shahini E, Maida M. Emerging artificial intelligence applications in gastroenterology: A review of the literature. Artif Intell Gastrointest Endosc 2020; 1:6-18. [DOI: 10.37126/aige.v1.i1.6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 07/07/2020] [Accepted: 07/16/2020] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) allows machines to provide disruptive value in several industries and applications. Applications of AI techniques, specifically machine learning and more recently deep learning, are emerging in gastroenterology. Computer-aided diagnosis for upper gastrointestinal endoscopy is attracting growing attention for the automated and accurate identification of dysplasia in Barrett’s esophagus, as well as for the detection of early gastric cancers (GCs), thereby helping to prevent esophageal and gastric malignancies. Moreover, convolutional neural network technology can accurately assess Helicobacter pylori (H. pylori) infection during standard endoscopy without the need for biopsies, thus reducing gastric cancer risk. AI can potentially be applied during colonoscopy to automatically discover colorectal polyps and differentiate between neoplastic and non-neoplastic ones, with the possible ability to improve the adenoma detection rate, which varies broadly among endoscopists performing screening colonoscopies. In addition, AI makes it possible to establish the feasibility of curative endoscopic resection of large colonic lesions based on pit pattern characteristics. The aim of this review is to analyze current evidence from the literature supporting recent AI technologies in both upper and lower gastrointestinal diseases, including Barrett’s esophagus, GC, H. pylori infection, colonic polyps, and colon cancer.
Collapse
Affiliation(s)
| | - Emanuele Sinagra
- Gastroenterology and Endoscopy Unit, Fondazione Istituto G. Giglio, Cefalù 90015, Italy
| | - Alessandro Vitello
- Gastroenterology and Endoscopy Unit, S. Elia- M. Raimondi Hospital, Caltanissetta 93100, Italy
| | - Endrit Shahini
- Gastroenterology and Endoscopy Unit, Istituto di Candiolo, FPO-IRCCS, Candiolo (Torino) 93100, Italy
| | - Marcello Maida
- Gastroenterology and Endoscopy Unit, S. Elia- M. Raimondi Hospital, Caltanissetta 93100, Italy
| |
Collapse
|
318
|
The Impact of Artificial Intelligence in the Endoscopic Assessment of Premalignant and Malignant Esophageal Lesions: Present and Future. Medicina (Kaunas) 2020; 56:364. [PMID: 32708343 PMCID: PMC7404688 DOI: 10.3390/medicina56070364] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 07/13/2020] [Accepted: 07/16/2020] [Indexed: 02/07/2023]
Abstract
In the gastroenterology field, the impact of artificial intelligence has been investigated for the purposes of diagnostics, risk stratification of patients, improvement in the quality of endoscopic procedures and early detection of neoplastic diseases, implementation of the best treatment strategy, and optimization of patient prognosis. Computer-assisted diagnostic systems for evaluating upper endoscopy images have recently emerged as a supporting tool in endoscopy due to the risks of misdiagnosis related to standard endoscopy, differing expertise levels of endoscopists, time-consuming procedures, lack of availability of advanced procedures, increasing workloads, and the development of endoscopic mass screening programs. Recent research has tended toward computerized, automatic, and real-time detection of lesions, approaches that offer utility in daily practice. Despite promising results, certain studies may overestimate the diagnostic accuracy of artificial systems, and several limitations remain to be overcome. Therefore, additional multicenter randomized trials and the further development of existing database platforms are needed before clinical implementation can be certified. This paper presents an overview of the literature and the current knowledge of the usefulness of different types of machine learning systems in the assessment of premalignant and malignant esophageal lesions via conventional and advanced endoscopic procedures. It introduces artificial intelligence terminology and reviews the most prominent recent research on computer-assisted diagnosis of neoplasia in Barrett’s esophagus and early esophageal squamous cell carcinoma, and on the prediction of invasion depth in esophageal neoplasms. Furthermore, this review highlights the main directions of future doctor–computer collaborations in which machines are expected to improve the quality of medical action and routine clinical workflow, thus reducing the burden on physicians.
Collapse
|
319
|
Development and Validation of an Image-based Deep Learning Algorithm for Detection of Synchronous Peritoneal Carcinomatosis in Colorectal Cancer. Ann Surg 2020; 275:e645-e651. [PMID: 32694449 DOI: 10.1097/sla.0000000000004229] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE The aim of this study was to build an SVM classifier using the ResNet-3D algorithm for prediction of synchronous peritoneal carcinomatosis (PC). BACKGROUND Adequate detection and staging of PC from colorectal cancer (CRC) remain difficult. METHODS The primary tumors in synchronous PC were delineated on preoperative contrast-enhanced computed tomography (CT) images. The features of the adjacent peritoneum were extracted to build a ResNet3D + SVM classifier. The performance of the ResNet3D + SVM classifier was evaluated in the test set and compared with routine CT evaluated by radiologists. RESULTS The training set consisted of 19,814 images from 54 patients with PC and 76 patients without PC. The test set consisted of 7837 images from 40 test patients. The ResNet-3D spent only 34 seconds to analyze the test images. To increase the accuracy of PC detection, an SVM classifier was built by integrating ResNet-3D features with twelve PC-specific features (P < 0.05). The ResNet3D + SVM classifier showed an accuracy of 94.11% with an AUC of 0.922 (0.912-0.944), sensitivity of 93.75%, specificity of 94.44%, PPV of 93.75%, and NPV of 94.44% in the test set. This performance was superior to that of routine contrast-enhanced CT (AUC: 0.791). CONCLUSIONS The ResNet3D + SVM classifier, a deep learning approach built on the ResNet-3D framework, has shown great potential for the prediction of synchronous PC in CRC.
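The headline statistic here, the area under the ROC curve, can be estimated with the Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one. A minimal pure-Python sketch with entirely hypothetical scores (the study's features and model are not reproduced here):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve:
    the fraction of positive/negative pairs in which the positive
    case receives the higher score (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for patients with and without PC
with_pc = [0.91, 0.85, 0.78, 0.60]
without_pc = [0.40, 0.35, 0.55, 0.20]
print(auc(with_pc, without_pc))  # 1.0 here: every positive outscores every negative
```

An AUC of 0.922, as reported, means roughly 92 of every 100 such positive/negative pairs are ranked correctly by the classifier.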
Collapse
|
320
|
Jin P, Ji X, Kang W, Li Y, Liu H, Ma F, Ma S, Hu H, Li W, Tian Y. Artificial intelligence in gastric cancer: a systematic review. J Cancer Res Clin Oncol 2020; 146:2339-2350. [PMID: 32613386 DOI: 10.1007/s00432-020-03304-9] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Accepted: 06/26/2020] [Indexed: 02/08/2023]
Abstract
OBJECTIVE This study aims to systematically review the application of artificial intelligence (AI) techniques in gastric cancer and to discuss the potential limitations and future directions of AI in gastric cancer. METHODS A systematic review was performed that follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Pubmed, EMBASE, the Web of Science, and the Cochrane Library were used to search for gastric cancer publications with an emphasis on AI that were published up to June 2020. The terms "artificial intelligence" and "gastric cancer" were used to search for the publications. RESULTS A total of 64 articles were included in this review. In gastric cancer, AI is mainly used for molecular bio-information analysis, endoscopic detection for Helicobacter pylori infection, chronic atrophic gastritis, early gastric cancer, invasion depth, and pathology recognition. AI may also be used to establish predictive models for evaluating lymph node metastasis, response to drug treatments, and prognosis. In addition, AI can be used for surgical training, skill assessment, and surgery guidance. CONCLUSIONS In the foreseeable future, AI applications can play an important role in gastric cancer management in the era of precision medicine.
Collapse
Affiliation(s)
- Peng Jin
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Xiaoyan Ji
- Department of Emergency Ward, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin, 300193, China
| | - Wenzhe Kang
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yang Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Hao Liu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Fuhai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shuai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Haitao Hu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Weikun Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yantao Tian
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| |
Collapse
|
321
|
Inaba A, Hori K, Yoda Y, Ikematsu H, Takano H, Matsuzaki H, Watanabe Y, Takeshita N, Tomioka T, Ishii G, Fujii S, Hayashi R, Yano T. Artificial intelligence system for detecting superficial laryngopharyngeal cancer with high efficiency of deep learning. Head Neck 2020; 42:2581-2592. [PMID: 32542892 DOI: 10.1002/hed.26313] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Revised: 04/18/2020] [Accepted: 05/15/2020] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND There are no published reports evaluating the ability of artificial intelligence (AI) in the endoscopic diagnosis of superficial laryngopharyngeal cancer (SLPC). We present our newly developed diagnostic AI model for SLPC detection. METHODS We used RetinaNet for object detection. SLPC and normal laryngopharyngeal mucosal images obtained with narrow-band imaging were used for the learning and validation data sets. Each independent data set comprised 400 SLPC and 800 normal mucosal images. The diagnostic AI model was constructed stage-wise and evaluated at each learning stage using the validation data sets. RESULTS In the validation data sets (100 SLPC cases), the median tumor size was 13.2 mm; flat/elevated/depressed types were found in 77/21/2 cases. Sensitivity, specificity, and accuracy improved each time learning images were added and were 95.5%, 98.4%, and 97.3%, respectively, after learning all SLPC and normal mucosal images. CONCLUSIONS The novel AI model is helpful for the detection of laryngopharyngeal cancer at an early stage.
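The per-stage evaluation reduces to computing sensitivity, specificity, and accuracy from confusion counts on the validation images. A minimal sketch; the counts below are hypothetical illustrations for a 400 cancer / 800 normal set, not the study's exact tallies:

```python
def detection_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from per-image confusion counts.
    tp/fn: cancer images detected / missed; tn/fp: normal images correctly
    passed / falsely flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 400 SLPC / 800 normal validation set
sens, spec, acc = detection_metrics(tp=382, fn=18, tn=787, fp=13)
print(f"{sens:.1%} {spec:.1%} {acc:.1%}")
```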
Collapse
Affiliation(s)
- Atsushi Inaba
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.,Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
| | - Keisuke Hori
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Yusuke Yoda
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.,Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Hiroaki Ikematsu
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.,Division of Science and Technology for Endoscopy, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center East, Kashiwa, Chiba, Japan
| | - Hiroaki Takano
- Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Hiroki Matsuzaki
- Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Yoshiki Watanabe
- Department of Medical Information, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Nobuyoshi Takeshita
- Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Toshifumi Tomioka
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Genichiro Ishii
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan.,Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center East, Kashiwa, Chiba, Japan
| | - Satoshi Fujii
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center East, Kashiwa, Chiba, Japan
| | - Ryuichi Hayashi
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Tomonori Yano
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.,Medical Device Innovation Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| |
Collapse
|
322
|
Cho BJ, Bang CS, Lee JJ, Seo CW, Kim JH. Prediction of Submucosal Invasion for Gastric Neoplasms in Endoscopic Images Using Deep-Learning. J Clin Med 2020; 9:jcm9061858. [PMID: 32549190 PMCID: PMC7356204 DOI: 10.3390/jcm9061858] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 05/31/2020] [Accepted: 06/09/2020] [Indexed: 02/06/2023] Open
Abstract
Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images, or on endoscopic ultrasound. These methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep-learning (DL) algorithms classifying the invasion depth of gastric cancers. Nevertheless, these algorithms are intended to be used after a definite diagnosis of gastric cancer, which is not always feasible across the variety of gastric neoplasms. This study aimed to establish a DL algorithm for accurately predicting submucosal invasion in endoscopic images of gastric neoplasms. Pre-trained convolutional neural network models were fine-tuned with 2899 white-light endoscopic images. The prediction models were subsequently validated with an external dataset of 206 images. In the internal test, the mean area under the curve discriminating submucosal invasion was 0.887 (95% confidence interval: 0.849–0.924) with the DenseNet-161 network. In the external test, the mean area under the curve reached 0.887 (0.863–0.910). Clinical simulation showed that 6.7% of patients who underwent gastrectomy in the external test were correctly identified by the established algorithm as candidates for endoscopic resection, avoiding unnecessary surgery. The established DL algorithm proves useful for the prediction of submucosal invasion in endoscopic images of gastric neoplasms.
Collapse
Affiliation(s)
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Anyang 14068, Korea
- Division of Biomedical Informatics, Seoul National University Biomedical Informatics (SNUBI), Seoul National University College of Medicine, Seoul 03080, Korea;
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
- Correspondence: (B.-J.C.); (C.S.B.); Tel.: +82-31-380-3835 (B.-J.C.); +82-33-240-5821 (C.S.B.)
| | - Chang Seok Bang
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Correspondence: (B.-J.C.); (C.S.B.); Tel.: +82-31-380-3835 (B.-J.C.); +82-33-240-5821 (C.S.B.)
| | - Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
| | - Chang Won Seo
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
| | - Ju Han Kim
- Division of Biomedical Informatics, Seoul National University Biomedical Informatics (SNUBI), Seoul National University College of Medicine, Seoul 03080, Korea;
| |
Collapse
|
323
|
Hashimoto R, Requa J, Dao T, Ninh A, Tran E, Mai D, Lugo M, El-Hage Chehade N, Chang KJ, Karnes WE, Samarasena JB. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett's esophagus (with video). Gastrointest Endosc 2020; 91:1264-1271.e1. [PMID: 31930967 DOI: 10.1016/j.gie.2019.12.049] [Citation(s) in RCA: 116] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2019] [Accepted: 12/30/2019] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS The visual detection of early esophageal neoplasia (high-grade dysplasia and T1 cancer) in Barrett's esophagus (BE) with white-light and virtual chromoendoscopy remains challenging. The aim of this study was to assess whether an artificial intelligence model based on a convolutional neural network can aid in the recognition of early esophageal neoplasia in BE. METHODS Nine hundred sixteen images from 65 patients with histology-proven early esophageal neoplasia in BE containing high-grade dysplasia or T1 cancer were collected. The area of neoplasia was masked using image annotation software. Nine hundred nineteen control images of BE without high-grade dysplasia were collected. A convolutional neural network (CNN) algorithm was pretrained on ImageNet and then fine-tuned with the goal of providing the correct binary classification of "dysplastic" or "nondysplastic." We developed an object detection algorithm that drew localization boxes around regions classified as dysplasia. RESULTS The CNN analyzed 458 test images (225 dysplasia and 233 nondysplasia) and correctly detected early neoplasia with a sensitivity of 96.4%, specificity of 94.2%, and accuracy of 95.4%. With regard to the object detection algorithm, across all images in the validation set the system achieved a mean average precision of 0.7533 at an intersection over union of 0.3. CONCLUSIONS In this pilot study, our artificial intelligence model was able to detect early esophageal neoplasia in BE images with high accuracy. In addition, the object detection algorithm was able to draw a localization box around the areas of dysplasia with high precision and at a speed that allows real-time implementation.
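The localization metric, mean average precision at an intersection-over-union (IoU) threshold of 0.3, rests on the IoU between a predicted box and the annotated ground truth. A minimal sketch with hypothetical box coordinates (the study's detector and annotations are not reproduced):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

predicted = (10, 10, 60, 60)   # hypothetical localization box from the detector
annotated = (20, 20, 70, 70)   # hypothetical expert annotation
hit = iou(predicted, annotated) >= 0.3  # the overlap threshold used in the study
print(iou(predicted, annotated), hit)
```

At a 0.3 threshold a detection counts as correct with fairly loose overlap, which suits flagging a region for the endoscopist rather than pixel-exact delineation.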
Collapse
Affiliation(s)
- Rintaro Hashimoto
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | | | | | | | - Elise Tran
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Daniel Mai
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Michael Lugo
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Nabil El-Hage Chehade
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Kenneth J Chang
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Williams E Karnes
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| | - Jason B Samarasena
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, Orange, California, USA
| |
Collapse
|
324
|
Qian Y, Qiu Y, Li CC, Wang ZY, Cao BW, Huang HX, Ni YH, Chen LL, Sun JY. A novel diagnostic method for pituitary adenoma based on magnetic resonance imaging using a convolutional neural network. Pituitary 2020; 23:246-252. [PMID: 32062801 DOI: 10.1007/s11102-020-01032-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
PURPOSE This study was designed to develop a computer-aided diagnosis (CAD) system based on a convolutional neural network (CNN) to diagnose patients with pituitary tumors. METHODS We included adult patients clinically diagnosed with pituitary adenoma (pituitary adenoma group) and adult individuals without pituitary adenoma (control group). After pre-processing, all the MRI data were randomly divided into training and testing datasets in a ratio of 8:2 to create and evaluate the CNN model. Multiple CNNs with the same structure were applied to the different types of MR images, and a comprehensive diagnosis was performed based on the classification results for the different image types using an equal-weighted majority voting strategy. Finally, we assessed the diagnostic performance of the CAD system by accuracy, sensitivity, specificity, positive predictive value, and F1 score. RESULTS We enrolled 149 participants with 796 MR images and adopted data augmentation to create 7960 new images. The proposed CAD method showed remarkable diagnostic performance, with an overall accuracy of 91.02%, sensitivity of 92.27%, specificity of 75.70%, positive predictive value of 93.45%, and F1 score of 92.67% on separate MRI types. In the comprehensive diagnosis, the CAD achieved better performance, with accuracy, sensitivity, and specificity of 96.97%, 94.44%, and 100%, respectively. CONCLUSION The CAD system could accurately diagnose patients with pituitary tumors based on MR images. We will further improve this CAD system by enlarging the dataset and will evaluate its performance on an external dataset.
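The comprehensive-diagnosis step, an equal-weighted majority vote over the per-sequence CNN outputs, can be sketched as follows; the sequence names and labels are hypothetical, not taken from the study:

```python
from collections import Counter

def majority_vote(predictions):
    """Equal-weighted majority vote across per-sequence CNN outputs.
    `predictions` maps an MRI sequence name to its predicted label."""
    tally = Counter(predictions.values())
    label, _ = tally.most_common(1)[0]
    return label

# Hypothetical per-sequence outputs for one patient
votes = {"T1_sagittal": "adenoma", "T1_coronal": "adenoma", "T2_axial": "normal"}
print(majority_vote(votes))  # adenoma
```

With an odd number of image types ties cannot occur; with an even number, a tie-breaking rule (e.g. preferring the positive class for sensitivity) would be needed.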
Collapse
Affiliation(s)
- Yu Qian
- Department of Neurosurgery, Jiangsu University Affiliated People's Hospital, Zhenjiang, 212002, Jiangsu, China
- Department of Neurosurgery, Zhenjiang Clinical Medical College of Nanjing Medical University, Zhenjiang, 212002, Jiangsu, China
| | - Yue Qiu
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
| | - Cheng-Cheng Li
- College of Intelligence and Computing, Tianjin University, Tianjin, 300072, China
| | - Zhong-Yuan Wang
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
| | - Bo-Wen Cao
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
| | - Hong-Xin Huang
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
| | - Yi-Hong Ni
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
| | - Lu-Lu Chen
- Department of Anatomy, Histology, and Embryology, Nanjing Medical University Nanjing, Jiangsu, 211166, China.
- Key Laboratory for Aging and Disease, Nanjing Medical University Nanjing, Jiangsu, 210029, China.
| | - Jin-Yu Sun
- The First Clinical Medical College of Nanjing Medical University, Nanjing, 210029, Jiangsu, China.
| |
Collapse
|
325
|
Automated Detection and Segmentation of Early Gastric Cancer from Endoscopic Images Using Mask R-CNN. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10113842] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Gastrointestinal endoscopy is widely conducted for the early detection of gastric cancer. However, it is often difficult to detect early gastric cancer lesions and accurately evaluate the invasive regions. Our study aimed to develop a detection and segmentation method for early gastric cancer regions from gastrointestinal endoscopic images. In this method, we first collected 1208 healthy and 533 cancer images. The gastric cancer region was detected and segmented from endoscopic images using Mask R-CNN, an instance segmentation method. An endoscopic image was provided to the Mask R-CNN, and a bounding box and a label image of the gastric cancer region were obtained. As a performance evaluation via five-fold cross-validation, sensitivity and false positives (FPs) per image were 96.0% and 0.10 FP/image, respectively. In the evaluation of segmentation of the gastric cancer region, the average Dice index was 71%. These results indicate that our proposed scheme may be useful for the detection of gastric cancer and evaluation of the invasive region in gastrointestinal endoscopy.
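The segmentation quality measure reported above, the Dice index, is straightforward to compute once a predicted and an annotated mask are in hand. A minimal sketch with synthetic masks as coordinate sets (not the study's data or its Mask R-CNN pipeline):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of pixel coordinates:
    twice the overlap divided by the total mask sizes."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

predicted = {(r, c) for r in range(10) for c in range(0, 10)}   # 100 px
annotated = {(r, c) for r in range(10) for c in range(5, 15)}   # 100 px, half overlap
print(dice_index(predicted, annotated))  # 0.5
```

A mean Dice of 71%, as reported, thus indicates substantially more overlap than the half-shifted example above.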
Collapse
|
326
|
Improving Efficacy of Endoscopic Diagnosis of Early Gastric Cancer: Gaps to Overcome from the Real-World Practice in Vietnam. BIOMED RESEARCH INTERNATIONAL 2020; 2020:7239075. [PMID: 32420364 PMCID: PMC7201490 DOI: 10.1155/2020/7239075] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/21/2019] [Accepted: 02/28/2020] [Indexed: 12/11/2022]
Abstract
Objective To identify factors associated with an increased proportion of early gastric cancer among total detected gastric cancers in patients undergoing diagnostic esophagogastroduodenoscopy. Methods A nationwide survey was conducted across 6 central-type and 6 municipal-type Vietnamese hospitals. A questionnaire regarding annual esophagogastroduodenoscopy volume, esophagogastroduodenoscopy preparation, the use of image-enhanced endoscopy, and the number of gastric cancers diagnosed in 2018 was sent to each hospital. Results The total proportion of early gastric cancer was 4.0% (115/2857). Routine preparation with simethicone and the use of image-enhanced endoscopy were associated with a higher proportion of early gastric cancer (OR 1.9, 95% CI: 1.1-3.2, p = 0.016; OR 2.7, 95% CI: 1.8-4.0, p < 0.001, respectively). Esophagogastroduodenoscopies performed at central-type hospitals were associated with a higher proportion of early gastric cancer (OR 1.9, 95% CI: 1.1-3.2, p = 0.017). Esophagogastroduodenoscopies performed at hospitals with an annual volume of 30,000-60,000 were associated with a higher proportion of early gastric cancer than those performed at hospitals with an annual volume of 10,000-<30,000 (OR 2.7, 95% CI: 1.6-4.8, p < 0.001) or >60,000-100,000 (OR 2.7, 95% CI: 1.7-4.2, p < 0.001). Only four (33.3%) hospitals reported all endoscopic types of early gastric cancer. Conclusions The detection of early gastric cancer remains challenging even for endoscopists working in regions with relatively high prevalence. This real-world evidence shows that endoscopic detection of early gastric cancer could potentially improve with simple adjustments to esophagogastroduodenoscopy protocols.
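The associations above are reported as odds ratios with 95% confidence intervals, which follow from a 2x2 table via the standard Woolf (log-OR) method. A sketch with purely hypothetical counts, not the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) from a 2x2 table:
    a, b = outcome present / absent in the exposed group;
    c, d = outcome present / absent in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: EGC found / not found, with and without simethicone prep
print(odds_ratio_ci(40, 960, 20, 980))
```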
Collapse
|
327
|
Horiuchi Y, Aoyama K, Tokai Y, Hirasawa T, Yoshimizu S, Ishiyama A, Yoshio T, Tsuchida T, Fujisaki J, Tada T. Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging. Dig Dis Sci 2020; 65:1355-1363. [PMID: 31584138 DOI: 10.1007/s10620-019-05862-6] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Accepted: 09/24/2019] [Indexed: 02/07/2023]
Abstract
BACKGROUND Early detection of early gastric cancer (EGC) allows for less invasive cancer treatment. However, differentiating EGC from gastritis remains challenging. Although magnifying endoscopy with narrow band imaging (ME-NBI) is useful for differentiating EGC from gastritis, acquiring this skill takes substantial effort. The image recognition ability of convolutional neural networks (CNNs), which convolve the input image while preserving its characteristics and can then classify it, has improved dramatically. AIMS To explore the diagnostic ability of a CNN system with ME-NBI for differentiating between EGC and gastritis. METHODS A 22-layer CNN system was pre-trained using 1492 EGC and 1078 gastritis images from ME-NBI. A separate test data set (151 EGC and 107 gastritis images based on ME-NBI) was used to evaluate the diagnostic ability [accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV)] of the CNN system. RESULTS The accuracy of the CNN system with ME-NBI images was 85.3%, with 220 of the 258 images correctly diagnosed. The method's sensitivity, specificity, PPV, and NPV were 95.4%, 71.0%, 82.3%, and 91.7%, respectively. Seven of the 151 EGC images were recognized as gastritis, whereas 31 of the 107 gastritis images were recognized as EGC. The overall test speed was 51.83 images/s (0.02 s/image). CONCLUSIONS The CNN system with ME-NBI can differentiate between EGC and gastritis in a short time with high sensitivity and NPV. Thus, the CNN system may complement the current clinical practice of diagnosis with ME-NBI.
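The reported metrics follow directly from the confusion-matrix counts implied by the abstract (151 EGC images with 7 missed, 107 gastritis images with 31 over-called). A minimal sketch:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts implied by the abstract: TP = 151 - 7, FN = 7, TN = 107 - 31, FP = 31
m = binary_metrics(tp=144, fp=31, tn=76, fn=7)
# Reproduces accuracy 85.3%, sensitivity 95.4%, specificity 71.0%, PPV 82.3%
# (NPV computes to about 91.6% from these counts; the abstract reports 91.7%)
```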
Collapse
Affiliation(s)
- Yusuke Horiuchi
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan.
| | - Kazuharu Aoyama
- AI Medical Service Inc., Arai Building 2F, 1-10-13 Minami Ikebukuro, Toshima-ku, Tokyo, 171-0022, Japan
| | - Yoshitaka Tokai
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Shoichi Yoshimizu
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Akiyoshi Ishiyama
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Tomohiro Tsuchida
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital, 3-10-6 Ariake, Koto-ku, Tokyo, 135-8550, Japan
| | - Tomohiro Tada
- AI Medical Service Inc., Arai Building 2F, 1-10-13 Minami Ikebukuro, Toshima-ku, Tokyo, 171-0022, Japan
- Tada Tomohiro Institute of Gastroenterology and Proctology, 7-2-1 Bessho, Minami-ku, Saitama, 336-0021, Japan
| |
Collapse
|
328
|
Zhang Y, Li F, Yuan F, Zhang K, Huo L, Dong Z, Lang Y, Zhang Y, Wang M, Gao Z, Qin Z, Shen L. Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence. Dig Liver Dis 2020; 52:566-572. [PMID: 32061504 DOI: 10.1016/j.dld.2019.12.146] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 12/28/2019] [Accepted: 12/31/2019] [Indexed: 12/11/2022]
Abstract
BACKGROUND The sensitivity of endoscopy in diagnosing chronic atrophic gastritis is only 42%, and multipoint biopsy, despite being more accurate, is not always available. AIMS This study aimed to construct a convolutional neural network to improve the diagnostic rate of chronic atrophic gastritis. METHODS We collected 5470 images of the gastric antrums of 1699 patients and labeled them with their pathological findings. Of these, 3042 images depicted atrophic gastritis and 2428 did not. We designed and trained a convolutional neural network-chronic atrophic gastritis model to diagnose atrophic gastritis accurately, verified by five-fold cross-validation. Moreover, the diagnoses of the deep learning model were compared with those of three experts. RESULTS The diagnostic accuracy, sensitivity, and specificity of the convolutional neural network-chronic atrophic gastritis model in diagnosing atrophic gastritis were 0.942, 0.945, and 0.940, respectively, which were higher than those of the experts. The detection rates of mild, moderate, and severe atrophic gastritis were 93%, 95%, and 99%, respectively. CONCLUSION Chronic atrophic gastritis could be diagnosed by gastroscopic images using the convolutional neural network-chronic atrophic gastritis model. This may greatly reduce the burden on endoscopy physicians, simplify diagnostic routines, and reduce costs for doctors and patients.
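The five-fold cross-validation used to verify the model above partitions the data into five disjoint folds, each serving once as the validation set. A framework-free sketch (the study's actual split procedure and any per-patient grouping are not described in the abstract):

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into 5 disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]

folds = five_fold_indices(5470)  # image count reported in the study
# Each round holds one fold out for validation and trains on the other four
for val in folds:
    train = [i for fold in folds if fold is not val for i in fold]
```

In practice, splitting by patient rather than by image avoids leaking near-duplicate images of the same lesion across folds.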
Collapse
Affiliation(s)
- Yaqiong Zhang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Fengxia Li
- Department of Gastroenterology, Shanxi Provincial People's Hospital, Taiyuan, China.
| | - Fuqiang Yuan
- Baidu Online Network Technology (Beijing) Corporation, Beijing, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lijuan Huo
- Department of Gastroenterology, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Zichen Dong
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yiming Lang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yapeng Zhang
- Fenyang College of Shanxi Medical University, Fenyang, China
| | - Meihong Wang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zenghui Gao
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zhenzhen Qin
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Leixue Shen
- School of Computer Science and Technology, Xidian University, Xi'an, China
| |
Collapse
|
329
|
A multi-scale recurrent fully convolution neural network for laryngeal leukoplakia segmentation. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101913] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
330
|
Gulati S, Patel M, Emmanuel A, Haji A, Hayee B, Neumann H. The future of endoscopy: Advances in endoscopic image innovations. Dig Endosc 2020; 32:512-522. [PMID: 31286574 DOI: 10.1111/den.13481] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/14/2019] [Accepted: 07/01/2019] [Indexed: 02/08/2023]
Abstract
The latest state-of-the-art technological innovations have led to palpable progress in endoscopic imaging and may facilitate standardisation of practice. One of the most rapidly evolving modalities is artificial intelligence, with recent studies providing real-time diagnoses and encouraging results in the first randomised trials against conventional endoscopic imaging. Advances in functional hypoxia imaging offer novel opportunities to detect neoplasia and to assess colitis. Three-dimensional volumetric imaging provides spatial information and has shown promise in increasing the detection of small polyps. Studies to date of self-propelling colonoscopes demonstrate an increased caecal intubation rate and may offer patients a more comfortable procedure. Further development in robotic technology has introduced ex vivo automated locomotor upper gastrointestinal and small bowel capsule devices. Eye-tracking has the potential to revolutionise endoscopic training through the identification of differences between expert and non-expert endoscopists as trainable parameters. In this review, we discuss the latest innovations in all these technologies and provide perspective on the exciting future of diagnostic luminal endoscopy.
Collapse
Affiliation(s)
- Shraddha Gulati
- King's Institute of Therapeutic Endoscopy, King's College Hospital NHS Foundation Trust, London, UK
| | - Mehul Patel
- King's Institute of Therapeutic Endoscopy, King's College Hospital NHS Foundation Trust, London, UK
| | - Andrew Emmanuel
- King's Institute of Therapeutic Endoscopy, King's College Hospital NHS Foundation Trust, London, UK
| | - Amyn Haji
- King's Institute of Therapeutic Endoscopy, King's College Hospital NHS Foundation Trust, London, UK
| | - Bu'Hussain Hayee
- King's Institute of Therapeutic Endoscopy, King's College Hospital NHS Foundation Trust, London, UK
| | - Helmut Neumann
- Department of Medicine, University Hospital Mainz, Mainz, Germany
| |
Collapse
|
331
|
Global updates in the treatment of gastric cancer: a systematic review. Part 2: perioperative management, multimodal therapies, new technologies, standardization of the surgical treatment and educational aspects. Updates Surg 2020; 72:355-378. [PMID: 32306277 DOI: 10.1007/s13304-020-00771-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2020] [Accepted: 04/11/2020] [Indexed: 12/24/2022]
Abstract
Gastric cancer is the fifth most common malignancy and the third leading cause of cancer death worldwide, according to the global cancer statistics presented in 2018. Its definition and staging were revised in the eighth edition of the AJCC/TNM classification, which took effect in 2018. Novel molecular classifications for GC have recently been established, and the process of translating these classifications into clinical practice is ongoing. The cornerstone of GC treatment is surgical, in the context of multimodal therapy. Surgical treatment is being standardized and is evolving according to new anatomical concepts and recent technological developments. This is leading to a massive improvement in the use of minimally invasive techniques. Minimally invasive techniques aim to be oncologically equivalent to open surgery, with better short-term outcomes. The pursuit of better short-term outcomes also includes the optimization of perioperative management, which is being implemented on a large scale according to enhanced-recovery-after-surgery principles. In the era of precision medicine, multimodal treatment is also evolving. The long-awaited results of many trials investigating the role of preoperative and postoperative management have been published, changing clinical practice. Novel investigations focused on both traditional chemotherapeutic regimens and targeted therapies are currently ongoing. Modern platforms increase the possibility of further standardization of the different treatments, promote the use of big data, and open new possibilities for surgical learning. This systematic review, in two parts, assesses all the current updates in GC treatment.
Collapse
|
332
|
Tokuyasu T, Iwashita Y, Matsunobu Y, Kamiyama T, Ishikake M, Sakaguchi S, Ebe K, Tada K, Endo Y, Etoh T, Nakashima M, Inomata M. Development of an artificial intelligence system using deep learning to indicate anatomical landmarks during laparoscopic cholecystectomy. Surg Endosc 2020; 35:1651-1658. [PMID: 32306111 PMCID: PMC7940266 DOI: 10.1007/s00464-020-07548-x] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Accepted: 04/04/2020] [Indexed: 12/14/2022]
Abstract
BACKGROUND The occurrence of bile duct injury (BDI) during laparoscopic cholecystectomy (LC) is an important medical issue. Expert surgeons prevent intraoperative BDI by identifying four landmarks. The present study aimed to develop a system that outlines these landmarks on endoscopic images in real time. METHODS An intraoperative landmark indication system was constructed using YOLOv3, a deep-learning-based object detection algorithm. The training datasets comprised approximately 2000 endoscopic images of the region of Calot's triangle in the gallbladder neck, obtained from 76 videos of LC. The trained YOLOv3 model was applied to 23 LC videos that were not used in training, to evaluate the accuracy with which the system identifies four landmarks: the cystic duct, the common bile duct, the lower edge of the left medial liver segment, and Rouviere's sulcus. Additionally, we constructed a prototype and used it in a verification experiment during an operation on a patient with cholelithiasis. RESULTS The YOLOv3 model was evaluated quantitatively and subjectively. The average precision values for each landmark were as follows: common bile duct, 0.320; cystic duct, 0.074; lower edge of the left medial liver segment, 0.314; and Rouviere's sulcus, 0.101. The two expert surgeons involved in the annotation confirmed consensus regarding valid indications for each landmark in 22 of the 23 LC videos. In the verification experiment, the use of the intraoperative landmark indication system made the surgical team more aware of the landmarks. CONCLUSIONS The intraoperative landmark indication system successfully identified four landmarks during LC, which may help to reduce the incidence of BDI and thus increase the safety of LC. The novel system proposed in the present study may prevent BDI during LC in clinical practice.
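The per-landmark average precision (AP) values above are the standard object-detection metric: precision at each true-positive rank, averaged over the ground-truth positives. A simplified sketch with a hypothetical ranked detection list (the paper's exact matching criterion, e.g. an IoU threshold, is not stated in the abstract):

```python
def average_precision(ranked_hits, n_positives=None):
    """AP for one class: ranked_hits[i] is True if the i-th ranked detection
    matches a ground-truth box. Precision at each true-positive rank is
    averaged over the number of ground-truth positives (assumed all
    retrieved when n_positives is None)."""
    tp, precisions = 0, []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)
    denom = n_positives if n_positives is not None else len(precisions)
    return sum(precisions) / denom if denom else 0.0

# Hypothetical ranked detections for a single landmark class
ap = average_precision([True, False, True, False, False, True])
```

Low AP for small or poorly delimited structures (such as the cystic duct here, at 0.074) is a common pattern, since both localization and ranking errors depress the metric.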
Collapse
Affiliation(s)
- Tatsushi Tokuyasu
- Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-higashi, Higashi-ku, Fukuoka-City, Fukuoka, 811-0295, Japan.
| | - Yukio Iwashita
- Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu-City, Oita, 879-5593, Japan
| | - Yusuke Matsunobu
- Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-higashi, Higashi-ku, Fukuoka-City, Fukuoka, 811-0295, Japan
| | - Toshiya Kamiyama
- Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, 2-3 Kuboyama-cho, Hachioji-City, Tokyo, 192-8512, Japan
| | - Makoto Ishikake
- Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, 2-3 Kuboyama-cho, Hachioji-City, Tokyo, 192-8512, Japan
| | - Seiichiro Sakaguchi
- Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, 2-3 Kuboyama-cho, Hachioji-City, Tokyo, 192-8512, Japan
| | - Kohei Ebe
- Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, 2-3 Kuboyama-cho, Hachioji-City, Tokyo, 192-8512, Japan
| | - Kazuhiro Tada
- Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu-City, Oita, 879-5593, Japan
| | - Yuichi Endo
- Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu-City, Oita, 879-5593, Japan
| | - Tsuyoshi Etoh
- Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu-City, Oita, 879-5593, Japan
| | - Makoto Nakashima
- Faculty of Science and Technology, Division of Computer Science and Intelligent Systems, Oita University, 700 Dannoharu, Oita-City, Oita, 870-1192, Japan
| | - Masafumi Inomata
- Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu-City, Oita, 879-5593, Japan
| |
Collapse
|
333
|
Hoogenboom SA, Bagci U, Wallace MB. Artificial intelligence in gastroenterology. The current state of play and the potential. How will it affect our practice and when? ACTA ACUST UNITED AC 2020. [DOI: 10.1016/j.tgie.2019.150634] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
334
|
Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y. Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:486. [PMID: 32395530 PMCID: PMC7210177 DOI: 10.21037/atm.2020.03.24] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Background Using deep learning techniques in image analysis is a dynamically emerging field. This study aims to use a convolutional neural network (CNN), a deep learning approach, to automatically classify esophageal cancer (EC) and distinguish it from premalignant lesions. Methods A total of 1,272 white-light images were adopted from 748 subjects, including normal cases, premalignant lesions, and cancerous lesions; 1,017 images were used to train the CNN, and another 255 images were used to evaluate the CNN architecture. Our proposed CNN structure consists of two subnetworks (O-stream and P-stream). The original images were used as inputs to the O-stream to extract color and global features, and the pre-processed esophageal images were used as inputs to the P-stream to extract texture and detail features. Results After fusing the two streams, the CNN system achieved an accuracy of 85.83%, a sensitivity of 94.23%, and a specificity of 94.67%. The classification accuracies for normal esophagus, premalignant lesion, and EC were 94.23%, 82.5%, and 77.14%, respectively, a better performance than the Local Binary Patterns (LBP) + Support Vector Machine (SVM) and Histogram of Oriented Gradients (HOG) + SVM methods. A total of 8 of the 35 (22.85%) EC lesions were categorized as premalignant because of their slightly reddish color and flat appearance. Conclusions The two-stream CNN system demonstrated high sensitivity and specificity on the endoscopic images. It achieved better detection performance than the currently used methods on the same datasets and has great application prospects for assisting endoscopists in distinguishing esophageal lesion subclasses.
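One common way to combine two such streams is late fusion: average the class probabilities from each stream and take the argmax. The abstract does not specify the paper's actual fusion layer, so the following is an illustrative, framework-free sketch with hypothetical logits:

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities (max-shifted for stability)."""
    exps = [math.exp(x - max(logits)) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(o_logits, p_logits):
    """Average the two streams' class probabilities, then take the argmax."""
    fused = [(a + b) / 2 for a, b in zip(softmax(o_logits), softmax(p_logits))]
    return max(range(len(fused)), key=fused.__getitem__)

# Classes 0/1/2 = normal / premalignant / EC; logits are hypothetical
pred = late_fusion([2.0, 0.5, 0.1], [1.0, 1.8, 0.2])
```

Averaging probabilities (rather than raw logits) keeps each stream's contribution on a comparable scale even when the streams' logit magnitudes differ.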
Collapse
Affiliation(s)
- Gaoshuang Liu
- Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Jie Hua
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Zhan Wu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China.,The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China
| | - Tianfang Meng
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China.,The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China
| | - Mengxue Sun
- Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Peiyun Huang
- Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Xiaopu He
- Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Weihao Sun
- Department of Geriatric Gerontology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Xueliang Li
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Yang Chen
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 211102, China.,The Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211102, China.,Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France
| |
Collapse
|
335
|
Abadir AP, Ali MF, Karnes W, Samarasena JB. Artificial Intelligence in Gastrointestinal Endoscopy. Clin Endosc 2020; 53:132-141. [PMID: 32252506 PMCID: PMC7137570 DOI: 10.5946/ce.2020.038] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 03/17/2020] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is rapidly integrating into modern technology and clinical practice. Although still in its nascency, AI has become a hot topic of investigation for clinical applications. Multiple fields of medicine have embraced the possibility of a future with AI assisting in diagnosis and pathology. In the field of gastroenterology, AI has been studied as a tool to assist in risk stratification, diagnosis, and pathologic identification. Specifically, AI has become of great interest in endoscopy as a technology with substantial potential to revolutionize the practice of the modern gastroenterologist. From cancer screening to automated report generation, AI has touched upon all aspects of modern endoscopy. Here, we review landmark AI developments in endoscopy. Starting with broad definitions to build understanding, we summarize the current state of AI research and its potential applications. With innovation developing rapidly, this article touches upon the remarkable advances in AI-assisted endoscopy since its initial evaluation at the turn of the millennium, and the potential impact these AI models may have on modern clinical practice. As with any discussion of new technology, its limitations must also be understood to apply clinical AI tools successfully.
Collapse
Affiliation(s)
- Alexander P Abadir
- Department of Medicine, University of California Irvine, Orange, CA, USA
| | - Mohammed Fahad Ali
- Department of Medicine, University of California Irvine, Orange, CA, USA
| | - William Karnes
- Division of Gastroenterology & Hepatology, Department of Medicine, H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine, Orange, CA, USA
| | - Jason B Samarasena
- Division of Gastroenterology & Hepatology, Department of Medicine, H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine, Orange, CA, USA
| |
Collapse
|
336
|
Yoon HJ, Kim JH. Lesion-Based Convolutional Neural Network in Diagnosis of Early Gastric Cancer. Clin Endosc 2020; 53:127-131. [PMID: 32252505 PMCID: PMC7137575 DOI: 10.5946/ce.2020.046] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 03/13/2020] [Indexed: 12/27/2022] Open
Abstract
Diagnosis and evaluation of early gastric cancer (EGC) using endoscopic images is highly important; however, it has some limitations. In several studies, the application of convolutional neural networks (CNNs) greatly enhanced the effectiveness of endoscopy. To maximize clinical usefulness, it is important to determine the optimal method of applying CNNs for each organ and disease. A lesion-based CNN is a type of deep learning model designed to learn the entire lesion from endoscopic images. This review describes the application of lesion-based CNN technology in the diagnosis of EGC.
Collapse
Affiliation(s)
- Hong Jin Yoon
- Division of Gastroenterology, Department of Internal Medicine, Soonchunhyang University College of Medicine, Cheonan, Korea
| | - Jie-Hyun Kim
- Division of Gastroenterology, Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| |
Collapse
|
337
|
Gonçalves WGE, dos Santos MHDP, Lobato FMF, Ribeiro-dos-Santos Â, de Araújo GS. Deep learning in gastric tissue diseases: a systematic review. BMJ Open Gastroenterol 2020; 7:e000371. [PMID: 32337060 PMCID: PMC7170401 DOI: 10.1136/bmjgast-2019-000371] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 02/14/2020] [Accepted: 02/24/2020] [Indexed: 12/24/2022] Open
Abstract
Background In recent years, deep learning has gained remarkable attention in medical image analysis due to its capacity to provide results comparable to those of specialists and, in some cases, to surpass them. Despite the emergence of deep learning research on gastric tissue diseases, few in-depth reviews address this topic. Method We performed a systematic review of applications of deep learning in gastric tissue disease analysis using digital histology, endoscopy and radiology images. Conclusions This review highlighted the high potential of, and the shortcomings in, deep learning studies applied to gastric cancer, ulcer, gastritis and non-malignant diseases. Our results demonstrate the effectiveness of gastric tissue analysis by deep learning applications. Moreover, we identified gaps in evaluation metrics and image collection availability that impact experimental reproducibility.
Collapse
Affiliation(s)
- Wanderson Gonçalves e Gonçalves
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
- Núcleo de Pesquisas em Oncologia, Universidade Federal do Pará, Belém, Pará, Brazil
| | | | | | - Ândrea Ribeiro-dos-Santos
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
- Núcleo de Pesquisas em Oncologia, Universidade Federal do Pará, Belém, Pará, Brazil
| | - Gilderlanio Santana de Araújo
- Laboratório de Genética Humana e Médica - Instituto de Ciências Biológicas, Universidade Federal do Pará, Belém, Pará, Brazil
| |
Collapse
|
338
|
Gan T, Liu S, Yang J, Zeng B, Yang L. A pilot trial of Convolution Neural Network for automatic retention-monitoring of capsule endoscopes in the stomach and duodenal bulb. Sci Rep 2020; 10:4103. [PMID: 32139758 PMCID: PMC7057987 DOI: 10.1038/s41598-020-60969-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Accepted: 02/12/2020] [Indexed: 02/05/2023] Open
Abstract
Retention of a capsule endoscope (CE) in the stomach or the duodenal bulb during the examination is a troublesome problem: medical staff can spend several hours observing whether the CE has entered the descending segment of the duodenum (DSD). This paper investigated and evaluated a Convolutional Neural Network (CNN) for automatic retention-monitoring of the CE in the stomach or the duodenal bulb. A CNN trained on 180,000 CE images of the DSD, stomach, and duodenal bulb was assessed for recognition accuracy by calculating the area under the receiver operating characteristic curve (ROC-AUC), sensitivity, and specificity. The AUC for distinguishing the DSD was 0.984. The sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 97.8%, 96.0%, 96.1%, and 97.8%, respectively, at a cut-off value of 0.42 for the probability score. The proportion of cases in which the DSD entry time marked by the CNN deviated by less than ±8 min was 95.7% (P < 0.01). These results indicate that the CNN for automatic retention-monitoring of the CE in the stomach or the duodenal bulb can serve as an efficient auxiliary measure in clinical practice.
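The ROC-AUC reported above is threshold-free: it equals the Mann-Whitney probability that a randomly chosen positive image scores higher than a randomly chosen negative one (the 0.42 cut-off only matters for the sensitivity/specificity figures). A sketch with hypothetical probability scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a positive outranks a
    negative; ties count as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical per-image probability scores around the reported 0.42 cut-off
auc = roc_auc([0.9, 0.6, 0.45, 0.3], [0.5, 0.35, 0.2])
```

This pairwise definition is O(n*m); rank-based formulas give the same value in O(n log n) for large image sets.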
Collapse
Affiliation(s)
- Tao Gan
- Department of Gastroenterology and Hepatology, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China
| | - Shuaicheng Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China
| | - Jinlin Yang
- Department of Gastroenterology and Hepatology, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China
| | - Bing Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China
| | - Li Yang
- Department of Gastroenterology and Hepatology, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China.
| |
Collapse
|
339
|
Yasuda T, Hiroyasu T, Hiwa S, Okada Y, Hayashi S, Nakahata Y, Yasuda Y, Omatsu T, Obora A, Kojima T, Ichikawa H, Yagi N. Potential of automatic diagnosis system with linked color imaging for diagnosis of Helicobacter pylori infection. Dig Endosc 2020; 32:373-381. [PMID: 31398276 DOI: 10.1111/den.13509] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 08/06/2019] [Indexed: 12/17/2022]
Abstract
BACKGROUND AND AIM It is necessary to establish universal methods for the endoscopic diagnosis of Helicobacter pylori (HP) infection, such as computer-aided diagnosis. In the present study, we propose a multistage diagnosis algorithm for HP infection. METHODS The aims of this study were: (i) to construct an interpretable automatic diagnostic system for HP infection using a support vector machine; and (ii) to compare the diagnostic capability of our artificial intelligence (AI) system with that of endoscopists. The presence of HP infection, as determined through linked color imaging (LCI), was learned through machine learning. Trained classifiers automatically diagnosed HP-positive and -negative patients examined using LCI. We retrospectively analyzed new images from 105 consecutive patients; 42 were HP positive, 46 were post-eradication, and 17 were uninfected. Five endoscopic images per case, taken from different areas, were read into the AI system and used in the HP diagnosis. RESULTS The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the diagnosis of HP infection using the AI system were 87.6%, 90.4%, 85.7%, 80.9%, and 93.1%, respectively. The accuracy of the AI system was higher than that of an inexperienced doctor, but there was no significant difference between the diagnoses of experienced physicians and the AI system. CONCLUSIONS The AI system can diagnose HP infection with significant accuracy. There remains room for improvement, particularly in the diagnosis of post-eradication patients. By learning more images and incorporating a diagnosis algorithm for post-eradication patients, our new AI system will provide diagnostic support, particularly to inexperienced physicians.
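When five images per case feed one per-case diagnosis, the per-image classifier outputs must be aggregated somehow. The abstract does not state the study's actual aggregation rule, so the following majority-vote sketch is purely illustrative:

```python
from collections import Counter

def case_diagnosis(image_calls):
    """Aggregate per-image HP calls (five per case in the study) into a
    single per-case diagnosis by majority vote. Illustrative choice only;
    averaging SVM decision values would be a common alternative."""
    return Counter(image_calls).most_common(1)[0][0]

case = case_diagnosis(["positive", "positive", "negative", "positive", "negative"])
```

With an odd number of images per case, a binary majority vote can never tie, which is one practical reason to sample an odd count.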
Affiliation(s)
- Takeshi Yasuda, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Tomoyuki Hiroyasu, Faculty of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Satoru Hiwa, Faculty of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Yuto Okada, Graduate School of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Sadanari Hayashi, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Yuki Nakahata, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Yuriko Yasuda, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Tatsushi Omatsu, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Akihiro Obora, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Takao Kojima, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Hiroshi Ichikawa, Faculty of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Nobuaki Yagi, Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
340
Adenocarcinoma Recognition in Endoscopy Images Using Optimized Convolutional Neural Networks. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10051650] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Indexed: 12/17/2022]
Abstract
Colonoscopy, the endoscopic examination of the colon using a camera, is considered the most effective method for diagnosing colorectal cancer. It is performed by a medical doctor who visually inspects the colon to find protruding or cancerous polyps. In some situations, these polyps are difficult to detect with the human eye, which may lead to misdiagnosis. In recent years, deep learning has revolutionized the field of computer vision owing to its exemplary performance. This study proposes a Convolutional Neural Network (CNN) architecture for classifying colonoscopy images as normal, adenomatous polyp, or adenocarcinoma, with the aim of assisting medical practitioners in the correct diagnosis of colorectal cancer. The proposed CNN architecture consists of 43 convolutional layers and one fully connected layer. We trained and evaluated the network on a colonoscopy image dataset of 410 test subjects provided by Gachon University Hospital, achieving an accuracy of 94.39%.
341
Tsuboi A, Oka S, Aoyama K, Saito H, Aoki T, Yamada A, Matsuda T, Fujishiro M, Ishihara S, Nakahori M, Koike K, Tanaka S, Tada T. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Dig Endosc 2020; 32:382-390. [PMID: 31392767 DOI: 10.1111/den.13507] [Citation(s) in RCA: 88] [Impact Index Per Article: 22.0] [Received: 06/14/2019] [Accepted: 08/04/2019] [Indexed: 12/12/2022]
Abstract
BACKGROUND AND AIM Although small-bowel angioectasia is reported to be the most common cause of bleeding in patients with obscure gastrointestinal bleeding and is frequently diagnosed by capsule endoscopy (CE), no computer-aided detection method has been established. We developed an artificial intelligence system with deep learning that can automatically detect small-bowel angioectasia in CE images. METHODS We trained a deep convolutional neural network (CNN) based on the Single Shot MultiBox Detector using 2237 CE images of angioectasia. We assessed its diagnostic accuracy by calculating the area under the receiver operating characteristic curve (ROC-AUC), sensitivity, specificity, positive predictive value, and negative predictive value using an independent test set of 10,488 small-bowel images, including 488 images of small-bowel angioectasia. RESULTS The ROC-AUC for detecting angioectasia was 0.998. Sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.8%, 98.4%, 75.4%, and 99.9%, respectively, at a cut-off value of 0.36 for the probability score. CONCLUSIONS We developed and validated a new CNN-based system to automatically detect angioectasia in CE images. It may be well suited to daily clinical practice, reducing the burden on physicians as well as the risk of oversights.
Affiliation(s)
- Akiyoshi Tsuboi, Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Shiro Oka, Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Hiroaki Saito, Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Tomoki Matsuda, Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Mitsuhiro Fujishiro, Department of Gastroenterology & Hepatology, Nagoya University Graduate School of Medicine, Aichi, Japan
- Soichiro Ishihara, Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Masato Nakahori, Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Shinji Tanaka, Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
- Tomohiro Tada, AI Medical Service Inc., Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
342
Ren J, Jing X, Wang J, Ren X, Xu Y, Yang Q, Ma L, Sun Y, Xu W, Yang N, Zou J, Zheng Y, Chen M, Gan W, Xiang T, An J, Liu R, Lv C, Lin K, Zheng X, Lou F, Rao Y, Yang H, Liu K, Liu G, Lu T, Zheng X, Zhao Y. Automatic Recognition of Laryngoscopic Images Using a Deep-Learning Technique. Laryngoscope 2020; 130:E686-E693. [PMID: 32068890 DOI: 10.1002/lary.28539] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Received: 07/30/2019] [Revised: 12/17/2019] [Accepted: 12/30/2019] [Indexed: 02/05/2023]
Abstract
OBJECTIVES/HYPOTHESIS To develop a deep-learning-based computer-aided diagnosis system for distinguishing laryngeal neoplasms (benign, precancerous lesions, and cancer) and to improve the accuracy of clinicians' diagnostic assessments of laryngoscopy findings. STUDY DESIGN Retrospective study. METHODS A total of 24,667 laryngoscopy images (normal, vocal nodules, polyps, leukoplakia, and malignancy) were collected to develop and test a convolutional neural network (CNN)-based classifier, which was compared with clinical visual assessments (CVAs) by 12 otolaryngologists. RESULTS On the independent testing dataset, an overall accuracy of 96.24% was achieved; for leukoplakia, benign, malignancy, normal, and vocal nodule, the sensitivity and specificity were 92.8% vs. 98.9%, 97% vs. 99.7%, 89% vs. 99.3%, 99.0% vs. 99.4%, and 97.2% vs. 99.1%, respectively. Furthermore, when compared with CVAs on a randomly selected test dataset, the CNN-based classifier outperformed physicians for most laryngeal conditions, with striking improvements in the ability to distinguish nodules (98% vs. 45%, P < .001), polyps (91% vs. 86%, P < .001), leukoplakia (91% vs. 65%, P < .001), and malignancy (90% vs. 54%, P < .001). CONCLUSIONS The CNN-based classifier can provide a valuable reference for the diagnosis of laryngeal neoplasms during laryngoscopy, especially for distinguishing benign, precancerous, and cancerous lesions. LEVEL OF EVIDENCE NA.
Affiliation(s)
- Jianjun Ren, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China; Medical Oncology and Medical Biophysics, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Xueping Jing, Department of Automation, College of Electrical Engineering and Information Technology, Sichuan University, Chengdu, China; Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
- Jing Wang, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Xue Ren, Department of Economic Statistics, School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, China
- Yang Xu, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Qiuyun Yang, Department of Forensics, West China School of Preclinical and Forensic Medicine, Sichuan University, Chengdu, China
- Lanzhi Ma, Department of Preclinical Medicine, West China School of Preclinical and Forensic Medicine, Sichuan University, Chengdu, China
- Yi Sun, Department of Preclinical Medicine, West China School of Preclinical and Forensic Medicine, Sichuan University, Chengdu, China
- Wei Xu, Department of Biostatistics, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Ning Yang, College of Computer Science, Sichuan University, Chengdu, China
- Jian Zou, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Yongbo Zheng, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Min Chen, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Weigang Gan, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Ting Xiang, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Junnan An, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Ruiqing Liu, Department of Otorhinolaryngology, Kunming City Women and Children Hospital, Kunming, China
- Cao Lv, Department of Otorhinolaryngology, The Second Affiliated Hospital of Kunming Medical University, Kunming, China
- Ken Lin, Department of Otorhinolaryngology, The Affiliated Children's Hospital of Kunming Medical University, Kunming, China
- Xianfeng Zheng, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Fan Lou, Department of Otorhinolaryngology, The Affiliated Children's Hospital of Kunming Medical University, Kunming, China
- Yufang Rao, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Hui Yang, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
- Kai Liu, Department of Automation, College of Electrical Engineering and Information Technology, Sichuan University, Chengdu, China
- Geoffrey Liu, Medical Oncology and Medical Biophysics, Princess Margaret Cancer Centre, Toronto, Ontario, Canada; Medicine and Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Tao Lu, Department of Otolaryngology and Head Neck Surgery, The First Affiliated Hospital of Kunming Medical University, Kunming, China
- Xiujuan Zheng, Department of Automation, College of Electrical Engineering and Information Technology, Sichuan University, Chengdu, China
- Yu Zhao, Department of Otorhinolaryngology, West China Hospital, West China Medical School, Sichuan University, Chengdu, China
343
Min JK, Kwak MS, Cha JM. Overview of Deep Learning in Gastrointestinal Endoscopy. Gut Liver 2020; 13:388-393. [PMID: 30630221 PMCID: PMC6622562 DOI: 10.5009/gnl18384] [Citation(s) in RCA: 98] [Impact Index Per Article: 24.5] [Received: 08/29/2018] [Revised: 09/22/2018] [Accepted: 10/01/2018] [Indexed: 12/20/2022] Open
Abstract
Artificial intelligence is likely to take over several roles currently performed by humans, and the adoption of artificial intelligence-based medicine in gastroenterology practice is expected in the near future. Medical image-based diagnosis, as in pathology, radiology, and endoscopy, is expected to be the first area of medicine affected by artificial intelligence. A convolutional neural network, a deep-learning method built on multilayer perceptrons and designed to require minimal preprocessing, was recently reported to be highly beneficial in the field of endoscopy, including esophagogastroduodenoscopy, colonoscopy, and capsule endoscopy. Convolutional neural network-based diagnostic programs have been challenged to recognize anatomical locations, Helicobacter pylori infection, and gastric cancer in esophagogastroduodenoscopy images; to detect and classify colorectal polyps; to recognize celiac disease and hookworm; and to characterize small-intestine motility in capsule endoscopy images. Artificial intelligence is expected to help endoscopists provide more accurate diagnoses by automatically detecting and classifying lesions; it is therefore essential that endoscopists focus on this novel technology. In this review, we describe the effects of artificial intelligence on gastroenterology, with a special focus on automatic diagnosis based on endoscopic findings.
Affiliation(s)
- Jun Ki Min, Department of Internal Medicine, Kyung Hee University School of Medicine, Seoul, Korea
- Min Seob Kwak, Department of Internal Medicine, Kyung Hee University School of Medicine, Seoul, Korea
- Jae Myung Cha, Department of Internal Medicine, Kyung Hee University School of Medicine, Seoul, Korea
344
Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest Endosc 2020; 91:301-309.e1. [PMID: 31585124 DOI: 10.1016/j.gie.2019.09.034] [Citation(s) in RCA: 72] [Impact Index Per Article: 18.0] [Received: 04/11/2019] [Accepted: 09/21/2019] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Diagnosing esophageal squamous cell carcinoma (SCC) depends on individual physician expertise and may be subject to interobserver variability. Therefore, we developed a computerized image-analysis system to detect and differentiate esophageal SCC. METHODS A total of 9591 nonmagnified endoscopy (non-ME) and 7844 ME images of pathologically confirmed superficial esophageal SCCs and 1692 non-ME and 3435 ME images from noncancerous lesions or normal esophagus were used as training image data. Validation was performed using 255 non-ME white-light images, 268 non-ME narrow-band images/blue-laser images, and 204 ME narrow-band images/blue-laser images from 135 patients. The same validation test data were diagnosed by 15 board-certified specialists (experienced endoscopists). RESULTS Regarding diagnosis by non-ME with narrow-band imaging/blue-laser imaging, the sensitivity, specificity, and accuracy were 100%, 63%, and 77%, respectively, for the artificial intelligence (AI) system and 92%, 69%, and 78%, respectively, for the experienced endoscopists. Regarding diagnosis by non-ME with white-light imaging, the sensitivity, specificity, and accuracy were 90%, 76%, and 81%, respectively, for the AI system and 87%, 67%, and 75%, respectively, for the experienced endoscopists. Regarding diagnosis by ME, the sensitivity, specificity, and accuracy were 98%, 56%, and 77%, respectively, for the AI system and 83%, 70%, and 76%, respectively, for the experienced endoscopists. There was no significant difference in the diagnostic performance between the AI system and the experienced endoscopists. CONCLUSIONS Our AI system showed high sensitivity for detecting SCC by non-ME and high accuracy for differentiating SCC from noncancerous lesions by ME.
345
Lui TK, Wong KK, Mak LL, To EW, Tsui VW, Deng Z, Guo J, Ni L, Cheung MK, Leung WK. Feedback from artificial intelligence improved the learning of junior endoscopists on histology prediction of gastric lesions. Endosc Int Open 2020; 8:E139-E146. [PMID: 32010746 PMCID: PMC6976335 DOI: 10.1055/a-1036-6114] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 06/25/2019] [Accepted: 10/09/2019] [Indexed: 12/12/2022] Open
Abstract
Background and study aims Artificial intelligence (AI)-assisted image classification has been shown to be highly accurate for endoscopic diagnosis. We evaluated the potential effects of an AI-assisted image classifier on the training of junior endoscopists in histological prediction of gastric lesions. Methods An AI image classifier was built on a convolutional neural network with five convolutional layers and three fully connected layers. A ResNet backbone was trained on 2,000 non-magnified endoscopic gastric images. The independent validation set consisted of another 1,000 endoscopic images from 100 gastric lesions. The first part of the validation set was reviewed by six junior endoscopists; the AI's predictions were then disclosed to three of them (Group A), while the remaining three (Group B) were not given this information. All endoscopists then reviewed the second part of the validation set independently. Results The overall accuracy of the AI was 91.0 % (95 % CI: 89.2-92.7 %), with 97.1 % sensitivity (95 % CI: 95.6-98.7 %), 85.9 % specificity (95 % CI: 83.0-88.4 %), and an area under the ROC curve (AUROC) of 0.91 (95 % CI: 0.89-0.93). The AI was superior to all junior endoscopists in accuracy and AUROC on both validation sets. The performance of Group A endoscopists, but not Group B endoscopists, improved on the second validation set (accuracy 69.3 % to 74.7 %; P = 0.003). Conclusion The trained AI image classifier can accurately predict the presence of a neoplastic component in gastric lesions. Feedback from the AI image classifier can also shorten the learning curve of junior endoscopists in predicting the histology of gastric lesions.
Affiliation(s)
- Thomas K.L. Lui, Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China
- Kenneth K.Y. Wong, Department of Computer Science, University of Hong Kong, Hong Kong, China
- Loey L.Y. Mak, Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China
- Elvis W.P. To, Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China
- Vivien W.M. Tsui, Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China
- Zijie Deng, Department of Medicine, University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Jiaqi Guo, Department of Medicine, University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Li Ni, Department of Medicine, University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Michael K.S. Cheung, Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China; Department of Medicine, University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Wai K. Leung (corresponding author), Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong, China
346
Teh JL, Shabbir A, Yuen S, So JBY. Recent advances in diagnostic upper endoscopy. World J Gastroenterol 2020; 26:433-447. [PMID: 32063692 PMCID: PMC7002908 DOI: 10.3748/wjg.v26.i4.433] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Received: 10/13/2019] [Revised: 01/10/2020] [Accepted: 01/14/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Esophagogastroduodenoscopy (EGD) is an important procedure for the detection and diagnosis of esophagogastric lesions, but no consensus exists on the technique of examination.
AIM To identify recent advances in diagnostic EGD that improve diagnostic yield.
METHODS We queried the PubMed database for relevant articles published between January 2001 and August 2019 and hand-searched references from recently published endoscopy guidelines. Keywords included free-text and MeSH terms addressing quality indicators and technological innovations in EGD. Factors affecting diagnostic yield and EGD quality were identified and divided into the following segments: pre-endoscopy preparation, sedation, examination schema, examination time, routine biopsy, image-enhanced endoscopy, and future developments.
RESULTS We identified 120 relevant abstracts, 67 of which we included in this review. Adequate pre-endoscopy preparation with simethicone and pronase increases gastric visibility. Proper sedation, especially with propofol, increases patient satisfaction after the procedure and may improve detection of superficial gastrointestinal lesions. There is a movement toward mandatory photodocumentation during EGD, and dedicating sufficient time to examination improves diagnostic yield. The use of image-enhanced and magnifying endoscopy improves detection of squamous cell carcinoma and gastric neoplasms. The magnifying endoscopy simple diagnostic algorithm is useful for the diagnosis of early gastric cancer.
CONCLUSION The past decade has seen steady momentum toward improving diagnostic yield, quality, and reporting in EGD. Other promising innovations, such as Raman spectroscopy, endocytoscopy, and artificial intelligence, may find widespread endoscopic application in the near future.
Affiliation(s)
- Jun-Liang Teh, Department of Surgery, National University Hospital System, Singapore 119228, Singapore; Department of Surgery, Jurong Health Campus, National University Health System, Singapore 609606, Singapore
- Asim Shabbir, Department of Surgery, National University Hospital System, Singapore 119228, Singapore
- Soon Yuen, Department of Surgery, National University Hospital System, Singapore 119228, Singapore; Department of Surgery, Jurong Health Campus, National University Health System, Singapore 609606, Singapore
- Jimmy Bok-Yan So, Department of Surgery, National University Hospital System, Singapore 119228, Singapore; Department of Surgery, National University of Singapore, Singapore 119074, Singapore
347
Ikeda A, Nosato H, Kochi Y, Kojima T, Kawai K, Sakanashi H, Murakawa M, Nishiyama H. Support System of Cystoscopic Diagnosis for Bladder Cancer Based on Artificial Intelligence. J Endourol 2020; 34:352-358. [PMID: 31808367 PMCID: PMC7099426 DOI: 10.1089/end.2019.0509] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Indexed: 01/10/2023] Open
Abstract
Introduction: Nonmuscle-invasive bladder cancer has a relatively high postoperative recurrence rate despite conventional treatment. Cystoscopy is essential for diagnosing and monitoring bladder cancer, but lesions can be overlooked with white-light imaging: small-diameter tumors; flat tumors, such as carcinoma in situ; and the extent of flat lesions associated with elevated lesions are all difficult to identify. In addition, the accuracy of cystoscopic diagnosis and treatment varies with the skill and experience of the physician. To improve the quality of bladder cancer diagnosis, we therefore aimed to support the cystoscopic diagnosis of bladder cancer using artificial intelligence (AI). Materials and Methods: A total of 2102 cystoscopic images, consisting of 1671 images of normal tissue and 431 images of tumor lesions, were used to create a dataset with an 8:2 ratio of training to test images. We constructed a tumor classifier based on a convolutional neural network (CNN) and evaluated the trained classifier on the test data. The true-positive rate was plotted against the false-positive rate as the classification threshold was varied to obtain the receiver operating characteristic (ROC) curve. Results: On the test data (tumor images: 87, normal images: 335), 78 images were true positive, 315 true negative, 20 false positive, and 9 false negative. The area under the ROC curve was 0.98, with a maximum Youden index of 0.837, sensitivity of 89.7%, and specificity of 94.0%. Conclusion: By objectively evaluating cystoscopic images with a CNN, it was possible to classify images containing tumor lesions and normal tissue. Objective evaluation of cystoscopic images using AI is expected to improve the accuracy of bladder cancer diagnosis and treatment.
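The sensitivity, specificity, and maximum Youden index reported in this abstract can be reproduced directly from the stated confusion-matrix counts (78 true positives, 315 true negatives, 20 false positives, 9 false negatives). The snippet below is an editorial illustration of that arithmetic, not code from the study:

```python
# Confusion-matrix counts reported for the test set
# (87 tumor images, 335 normal images).
tp, tn, fp, fn = 78, 315, 20, 9

sensitivity = tp / (tp + fn)            # true-positive rate
specificity = tn / (tn + fp)            # true-negative rate
youden = sensitivity + specificity - 1  # Youden's J statistic

print(f"sensitivity = {sensitivity:.1%}")  # 89.7%
print(f"specificity = {specificity:.1%}")  # 94.0%
print(f"Youden index = {youden:.3f}")      # 0.837
```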
Affiliation(s)
- Atsushi Ikeda, Department of Urology, University of Tsukuba Hospital, Tsukuba, Japan
- Hirokazu Nosato, Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Yuta Kochi, Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan; Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
- Takahiro Kojima, Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
- Koji Kawai, Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
- Hidenori Sakanashi, Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan; Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
- Masahiro Murakawa, Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan; Department of Intelligent Interaction Technologies, Graduate School of System and Information Engineering, University of Tsukuba, Tsukuba, Japan
- Hiroyuki Nishiyama, Department of Urology, University of Tsukuba Hospital, Tsukuba, Japan; Department of Urology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
348
Lee JH, Han IH, Kim DH, Yu S, Lee IS, Song YS, Joo S, Jin CB, Kim H. Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks: A Preliminary Study. J Korean Neurosurg Soc 2020; 63:386-396. [PMID: 31931556 PMCID: PMC7218205 DOI: 10.3340/jkns.2019.0084] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Received: 04/04/2019] [Accepted: 06/11/2019] [Indexed: 02/06/2023] Open
Abstract
OBJECTIVE To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarity between synthesized and real MR images. METHODS GANs were trained to transform spine CT image slices into axial T2-weighted spine MR (MRT2) image slices by combining adversarial loss and voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). RESULTS The mean overall similarity of the synthetic MRT2 images, as rated by the radiologists, was 80.2%. In the blind classification of real MRT2 images, the failure rate ranged from 0% to 40%. The MAE of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR ranged from 61.96 to 68.16 dB (mean, 64.92 dB). CONCLUSION This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were implemented relatively well. Synthesis of medical images using GANs is a new paradigm for artificial intelligence in medical imaging, and we expect that synthesizing MR images from spine CT images with GANs will improve the diagnostic usefulness of CT. To better inform clinical application of this technique, further studies are needed involving larger datasets, a variety of pathologies, and other MR sequences of the lumbar spine.
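For reference, MAE and PSNR are standard image-similarity measures. A minimal sketch of their usual definitions follows; the peak value of 255 assumes 8-bit images, and the abstract does not state the pixel range or exact formulation the study used:

```python
import numpy as np

def mae(real: np.ndarray, synth: np.ndarray) -> float:
    # Mean absolute error between two images, in pixel-intensity units.
    return float(np.mean(np.abs(real.astype(np.float64) - synth.astype(np.float64))))

def psnr(real: np.ndarray, synth: np.ndarray, peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio in dB; higher means closer to the reference image.
    mse = np.mean((real.astype(np.float64) - synth.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```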
Affiliation(s)
- Jung Hwan Lee, Department of Neurosurgery, Pusan National University Hospital, Busan, Korea
- In Ho Han, Department of Neurosurgery, Pusan National University Hospital, Busan, Korea
- Dong Hwan Kim, Department of Neurosurgery, Pusan National University Hospital, Busan, Korea
- Seunghan Yu, Department of Neurosurgery, Pusan National University Hospital, Busan, Korea
- In Sook Lee, Department of Radiology, Pusan National University Hospital, Busan, Korea
- You Seon Song, Department of Radiology, Pusan National University Hospital, Busan, Korea
- Cheng-Bin Jin, School of Information and Communication Engineering, Inha University, Incheon, Korea
- Hakil Kim, School of Information and Communication Engineering, Inha University, Incheon, Korea
349
Gulati S, Emmanuel A, Patel M, Williams S, Haji A, Hayee B, Neumann H. Artificial intelligence in luminal endoscopy. Ther Adv Gastrointest Endosc 2020; 13:2631774520935220. [PMID: 32637935 PMCID: PMC7315657 DOI: 10.1177/2631774520935220] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 01/16/2020] [Accepted: 05/22/2020] [Indexed: 12/15/2022] Open
Abstract
Artificial intelligence is a strong focus of interest for global health development. Diagnostic endoscopy is an attractive substrate for artificial intelligence, with real potential to improve patient care by standardising endoscopic diagnosis and serving as an adjunct to enhanced imaging diagnosis. The possibility of amassing large datasets to refine algorithms makes adoption of artificial intelligence into global practice a potential reality. Initial studies in luminal endoscopy involved machine learning and were retrospective; appreciable improvement in diagnostic performance has come with the adoption of deep learning. Research foci in the upper gastrointestinal tract include the diagnosis of neoplasia, including Barrett's, squamous cell and gastric neoplasia, where prospective and real-time artificial intelligence studies have been completed, demonstrating a benefit of artificial intelligence-augmented endoscopy. Deep learning applied to small-bowel capsule endoscopy also appears to enhance pathology detection and reduce capsule reading time. Prospective evaluation, including the first randomised trial, has been performed in the colon, demonstrating improved polyp and adenoma detection rates; however, these gains appear to be limited to small polyps. Artificial intelligence also has potential roles in improving the quality of endoscopic examinations, training, and the triaging of referrals. Further large-scale, multicentre and cross-platform validation studies are required for the robust incorporation of artificial intelligence-augmented diagnostic luminal endoscopy into routine clinical practice.
Collapse
Affiliation(s)
- Shraddha Gulati
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Andrew Emmanuel
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Mehul Patel
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Sophie Williams
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Amyn Haji
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Bu’Hussain Hayee
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
| | - Helmut Neumann
- Department of Interdisciplinary Endoscopy, University Hospital Mainz, 55131 Mainz, Germany
| |
Collapse
|
350
|
Guo L, Xiao X, Wu C, Zeng X, Zhang Y, Du J, Bai S, Xie J, Zhang Z, Li Y, Wang X, Cheung O, Sharma M, Liu J, Hu B. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest Endosc 2020; 91:41-51. [PMID: 31445040 DOI: 10.1016/j.gie.2019.08.018] [Citation(s) in RCA: 116] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 08/08/2019] [Indexed: 02/07/2023]
Abstract
BACKGROUND AND AIMS We developed a computer-assisted diagnosis (CAD) system for real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinomas (ESCCs) to assist the diagnosis of esophageal cancer. METHODS A total of 6473 narrow-band imaging (NBI) images, including precancerous lesions, early ESCCs, and noncancerous lesions, were used to train the CAD system. We validated the CAD system using both endoscopic image and video datasets. The receiver operating characteristic curve of the CAD system was generated from the image datasets. An artificial intelligence probability heat map was generated for each input endoscopic image: yellow indicated a high probability of a cancerous lesion, and blue indicated noncancerous tissue. When the CAD system detected a precancerous lesion or early ESCC, the lesion of interest was masked with color. RESULTS The image datasets contained 1480 malignant NBI images from 59 consecutive cancerous cases (sensitivity, 98.04%) and 5191 noncancerous NBI images from 2004 cases (specificity, 95.03%). The area under the curve was 0.989. The video datasets of precancerous lesions or early ESCCs included 27 nonmagnifying videos (per-frame sensitivity, 60.8%; per-lesion sensitivity, 100%) and 20 magnifying videos (per-frame sensitivity, 96.1%; per-lesion sensitivity, 100%). The dataset of unaltered full-length normal esophagus videos included 33 videos (per-frame specificity, 99.9%; per-case specificity, 90.9%). CONCLUSIONS The deep learning model demonstrated high sensitivity and specificity for both endoscopic image and video datasets. The real-time CAD system shows promise for assisting endoscopists in diagnosing precancerous lesions and early ESCCs in the near future.
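The abstract above distinguishes per-frame from per-lesion sensitivity: a lesion counts as detected if the CAD system flags it in at least one video frame, so per-lesion sensitivity can reach 100% even when many individual frames are missed. A minimal sketch of that computation (illustrative only; not the authors' code, and the function names and example predictions are hypothetical):

```python
# Illustrative sketch: deriving per-frame and per-lesion sensitivity
# from frame-level CAD predictions on lesion-containing videos.

def per_frame_sensitivity(frame_predictions):
    """Fraction of lesion-containing frames flagged by the CAD system.

    frame_predictions: one inner list per lesion video, with a boolean
    per lesion-containing frame (True = CAD flagged the frame).
    """
    flagged = sum(sum(video) for video in frame_predictions)
    total = sum(len(video) for video in frame_predictions)
    return flagged / total

def per_lesion_sensitivity(frame_predictions):
    """Fraction of lesions detected in at least one frame."""
    detected = sum(any(video) for video in frame_predictions)
    return detected / len(frame_predictions)

# Hypothetical predictions for three lesion videos: many frames are
# missed, yet each lesion is caught somewhere, so per-lesion
# sensitivity is 100% while per-frame sensitivity is lower.
preds = [
    [True, False, True, False],   # lesion 1: 2/4 frames flagged
    [False, True, False],         # lesion 2: 1/3 frames flagged
    [True, True, True],           # lesion 3: 3/3 frames flagged
]
print(per_frame_sensitivity(preds))   # 6/10 = 0.6
print(per_lesion_sensitivity(preds))  # 3/3 = 1.0
```

The same any-frame logic, applied to videos without lesions, yields the per-case specificity the abstract reports for the normal-esophagus videos.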
Collapse
Affiliation(s)
- LinJie Guo
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Xiao Xiao
- Shanghai Wision AI Co Ltd, Shanghai, China
| | - ChunCheng Wu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Xianhui Zeng
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Yuhang Zhang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Jiang Du
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Shuai Bai
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Jia Xie
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | | | - Yuhong Li
- Shanghai Wision AI Co Ltd, Shanghai, China
| | | | - Onpan Cheung
- San Bernardino Gastroenterology Associates Inc and ACE Endoscopy and Surgery Center, Rialto, California, USA
| | | | | | - Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| |
Collapse
|